See Why

Learn Venue

Embedding vision AI in low-cost devices.

You can get feedback like this for free in our app.

See Why is an AI tool designed to help you understand why your YC application may have been rejected and what you can do to improve it. By analyzing YC content and top applications, See Why provides actionable feedback, helping you refine your application for better chances of success.

How See Why Can Help You

  • Identify Weaknesses: Understand the specific areas where your application falls short.

  • Actionable Insights: Receive clear, practical advice on how to improve your application.

  • Continuous Improvement: Discover opportunities for ongoing enhancement to stay competitive.

YC Application Feedback

Outcome: Unsuccessful

Describe what your company does in 50 characters or less.

Embedding vision AI in low-cost devices.

What is your company going to make? Please describe your product and what it does or will do.

We are developing an AI-powered operating system, VisionOS, for low-cost security cameras that will enable them to detect objects such as people, vehicles, or animals in a live video.

Today, these security cameras lack the ability to detect objects in a live video feed: object detection is computationally expensive, and the low-powered hardware of a security camera cannot handle it.

With our technology, the security camera will be able to run object detection on the device itself, in real time. Our technology is up to 62x faster than state-of-the-art methods.

Where do you live now, and where would the company be based after YC?

New Delhi, India / New Delhi, India

Founders

Please tell us about an interesting project, preferably outside of class or work, that two or more of you created together. Include urls if possible.

Before working on this idea we were building a chatbot that could answer questions by searching inside videos. Asked something like "When was this taught in this lecture?", it would pinpoint the moment in the video where it happened. The demo is available on the website:

https://learnvenue.com/ (now operating as https://unrealai.xyz/)

We made progress with the chatbot and reached out to the CMO of India's largest online course provider. He looped in his fellow CXOs and VPs on a video call with us to understand the product. We then had to back out because they asked us to log into our servers and show them how it was done.

After this debacle, we reached out to other online course providers. They were all very keen on the technology but couldn't find a prominent use case for it. We then extended the idea to searching for real-world objects inside videos, such as people, vehicles, or animals. While working on this problem we realized that our computers were painfully slow at analyzing videos for objects; that became the premise of this startup.

Please tell us in one or two sentences about something impressive that each founder has built or achieved.

—- Saurabh —-

I hold a patent on the world's cheapest Braille printer, a $100 device I built by hacking an XY plotter.

I've also pitched a product on a nationally televised show, "Pulse The Venture" on CNN News 18. It was a chatbot that could answer scholastic questions.

I built a startup with four fellow students in college and ran it for a year. It was a platform that connected students with vocational training institutes.

—- Nishchal —-

I worked with NGOs for 4 years to impart academic and moral education to hundreds of homeless and underprivileged children in Delhi.

I was admitted to the University of Edinburgh, one of the top 20 universities in the world, on the basis of my academic performance and my projects in machine learning.

I was awarded the Scholarship for Higher Education by the CBSE, Govt. of India, for scoring in the top 1% nationally in the science stream.

Please tell us about the time you most successfully hacked some (non-computer) system to your advantage.

—- Saurabh —-

Early in 2016, I participated in an event called Startup Weekend. The rules: participants pitched their ideas, the ideas were put to a vote, and the people whose ideas received the most votes formed teams with the other participants to work on them.

I pitched my idea but it didn't make it through the voting stage. I went to the organizers and told them I wanted to work on my idea anyway, but they just reiterated the rules. I persisted, and the organizers finally challenged me: if I could form a team of at least four, I could work on it.

I took the challenge, approached and cajoled participants, and built a team of six, the largest at the event. We finished as runners-up and won $500 as well.

—- Nishchal —-

I conducted several workshops during my undergraduate years and faced a trivial problem: we needed separate permission from the higher authorities to conduct them, and each permission was valid for only 2 months.

Once I needed permission for a whole year. I had figured out a gap between how the higher and lower levels of management operated in my college: once something was approved at the higher level, the lower management never questioned it. I deliberately wrote the permission letter so that the first page highlighted the starting date of the workshop, while a paragraph on the second page stated the requirement for the whole year. The higher authorities are generally short on time, so they skim the first page and decide accordingly.

My permission application was approved by the higher management. I then went to the lower management and highlighted the part about the whole-year requirement. They approved it without questioning the duration.

How long have the founders known one another and how did you meet? Have any of the founders not met in person?

We have known each other for almost 7 months. We first came in contact in July '17, when Nishchal was about to return to India after completing his master's in AI. He was looking for opportunities in startups and discovered this one. We discussed and assessed each other for over a month before deciding to work on the opportunity of video intelligence.

Nishchal came back to India in October '17, and since then we have been meeting and working together. In this time we have completed a product that can search inside videos and pivoted it toward on-device intelligence.

Category

Which category best applies to your company?

Artificial Intelligence

Is this application in response to a YC RFS?

Yes

If yes, which one?

AI

Progress

How far along are you?

It will take 2 more months to create a functional OS that can be integrated into security cameras. However, the core technology that enables AI models to run on low-cost devices is working today.

How long have each of you been working on this? How much of that has been full-time? Please explain.

We have been working on this together for almost 4 months. The startup itself is 2 years old, however; I, along with other team members (most of whom have left), worked on it before I brought Nishchal on board as a co-founder. The idea has been pivoted 3 times in those 2 years. Nishchal and I both work full time on this startup and freelance to cover our expenses.

Which of the following best describes your progress?

Prototype Built

How many active users or customers do you have? If you have some particularly valuable customers, who are they? If you're building hardware, how many units have you shipped?

We have demoed the prototype to an IP camera manufacturer in the US and to a local factory owner. The factory owner made us an offer of $250 to install it.

Do you have revenue?

No

If you are applying with the same idea as a previous batch, did anything change? If you applied with a different idea, why did you pivot and what did you learn from the last idea?

NA

If you have already participated or committed to participate in an incubator, "accelerator" or "pre-accelerator" program, please tell us about it.

NA

Idea

Why did you pick this idea to work on? Do you have domain expertise in this area? How do you know people need what you're making?

We stumbled upon this opportunity while we were building a chatbot that answers queries by searching inside videos. We realized that our AI models ran very slowly on our computers; it took around 1.5 hours to analyze a 1-minute video. So we decided to venture into research and figure out a way to make them run faster.

Nishchal has core expertise in AI and I have expertise in UX. I also have horizontal skills ranging from deep learning and UI development to managing servers and backend services. We are in touch with industry experts as well, and through them we are learning about the security/surveillance space.

We have gotten in touch with an IP camera manufacturer based in the US and discussed the possibility of integrating the OS into their hardware. They are keen on our technology, and we are in verbal talks with them.

To understand whether the end consumer would benefit from AI-powered surveillance, we got in touch with a local factory that was installing security cameras. We gave them a demo on a Raspberry Pi that detected people as they appeared in the frame, without using the internet. The owner offered us $250 per camera to install the system.
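For illustration, here is a minimal sketch of what an offline person-detection demo of this kind might look like on a Raspberry Pi. This is not the founders' VisionOS or their claimed 62x-faster models; it assumes OpenCV's DNN module and a publicly available MobileNet-SSD model, with the prototxt and caffemodel files downloaded locally beforehand.

```python
# Hypothetical sketch: offline, on-device person detection on a Raspberry Pi.
# Assumes OpenCV with DNN support and a locally downloaded MobileNet-SSD model;
# it is illustrative only and not the technology described in the application.
import cv2
import numpy as np

PROTOTXT = "MobileNetSSD_deploy.prototxt"   # assumed local model definition
WEIGHTS = "MobileNetSSD_deploy.caffemodel"  # assumed local pretrained weights
PERSON_CLASS_ID = 15                        # "person" in the VOC label set
CONF_THRESHOLD = 0.5

net = cv2.dnn.readNetFromCaffe(PROTOTXT, WEIGHTS)
cap = cv2.VideoCapture(0)  # Pi camera module or USB webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Resize to 300x300 and normalize as the MobileNet-SSD model expects.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        class_id = int(detections[0, 0, i, 1])
        if class_id == PERSON_CLASS_ID and confidence > CONF_THRESHOLD:
            # Scale the normalized box back to frame coordinates.
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            x1, y1, x2, y2 = box.astype("int")
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 255, 0), 2)
    cv2.imshow("on-device person detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

On a stock Raspberry Pi, an off-the-shelf detector like this typically manages only a frame or two per second, which is the kind of performance gap the application claims VisionOS closes.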

What's new about what you're making? What substitutes do people resort to because it doesn't exist yet (or they don't know about it)?

We are building new AI models that enable low-cost devices to analyze video on the device itself, eliminating the need to rely on Google's or Amazon's cloud intelligence or to invest in costly computing hardware. Our AI models run up to 62x faster than current state-of-the-art methods.

In the security space, video is analyzed only after an incident, and today that analysis is done by humans. 'Netatmo Presence' is a security camera that can classify objects such as people, cars, and animals in a video.

Recently, Amazon announced the 'DeepLens' video camera, which can also "classify" a live video feed; it is targeted at developers. It is a step in the right direction.

However, we must mention that image classification is not the same as object detection: classification assigns a single label to a whole frame, while detection also localizes each object, and classification is at least 20x less computationally expensive.

There are also IP cameras on the market from Nest, Blink, and Hikvision that can detect motion and trigger a notification to the owner. However, they cannot differentiate between motion caused by innocuous objects, such as pets, and motion caused by intruders.

Who are your competitors, and who might become competitors? Who do you fear most?

Our direct competitor is XNOR.AI, a startup based out of the Netherlands. We share a similar technology and a similar vision: make the device intelligent rather than the cloud. Google and Amazon cloud video intelligence also pose an indirect threat. There is always a chance that Google ventures into on-device intelligence, though we believe this would cut against their established infrastructure and services around cloud intelligence. Still, this is what we fear most.

What do you understand about your business that other companies in it just don't get?

OEMs realize that using the cloud or an on-premise server is not the best way to analyze video from all the security cameras they sell. First, there is a perpetual cost associated with it; second, limited computational power caps the number of video streams that can be analyzed at the same time.

For consumers, if security cameras relied on the cloud, the bandwidth required to upload the camera feed would be over 40 GB/day, just to figure out whether there were unwanted people or vehicles in the video. Worse, if the network goes down, they are left vulnerable.

Making the security camera itself do the job of identifying objects is the best way forward; the camera must be made intelligent. Thanks to our technology, low-cost devices can now analyze video in real time without depending on the cloud.

How do or will you make money? How much could you make?

We will license VisionOS to OEMs such as Bosch and Panasonic.

We will charge $25 per device.

Security cameras sold worldwide in 2016: 52 million units, growing at 17% CAGR

TAM: 52M units x $25 = $1.3B

SAM (IP cameras): 38M units x $25 = $950M

https://www.sdmmag.com/articles/92407-rise-of-surveillance-camera-installed-base-slows
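The TAM and SAM figures follow directly from the quoted unit counts and the proposed $25 license fee; a quick, illustrative back-of-the-envelope script reproduces them:

```python
# Back-of-the-envelope check of the TAM/SAM figures quoted above.
# Unit counts are from the cited SDM article; $25/device is the proposed license fee.
units_sold_2016 = 52_000_000      # all security cameras sold worldwide in 2016
ip_camera_units = 38_000_000      # IP cameras only
license_fee_usd = 25              # proposed fee per device

tam = units_sold_2016 * license_fee_usd   # 52M x $25 = $1.3B
sam = ip_camera_units * license_fee_usd   # 38M x $25 = $950M
print(f"TAM: ${tam / 1e9:.1f}B, SAM: ${sam / 1e6:.0f}M")
```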

How will you get users? If your idea is the type that faces a chicken-and-egg problem in the sense that it won't be attractive to users till it has a lot of users (e.g. a marketplace, a dating site, an ad network), how will you overcome that?

Since we are a B2B startup, getting the right connections to the OEMs will get us there.

Equity

Have you incorporated, or formed any legal entity (like an LLC) yet?

No

How much money do you spend per month?

500

How much money does your company have in the bank now?

3000

How long is your runway?

6 months

Please provide any other relevant information about the structure or formation of the company.

NA

Legal

Are any of the founders covered by noncompetes or intellectual property agreements that overlap with your project? If so, please explain.

Nishchal primarily writes the code for the on-device intelligence technology. We research it together. Servers and design are handled by me. We have one more team member (a non-founder) who is developing the OS.

Is there anything else we should know about your company?

Some people (non-founders) worked on previous pivots of the startup and have since left. They were not legally our employees.

Others

If you had any other ideas you considered applying with, please list them. One may be something we've been waiting for. Often when we fund people it's to do something they list here and not in the main application.

Our technology can also be used in:

Developing an SDK:

The SDK can be used by mobile developers to harness the power of the mobile device to run computer vision applications.

Augmented Reality:

To detect objects in a live video stream. Think of Google Glass with added object tracking to achieve near 'Iron Man' capabilities.

Drone OS:

Drones are essentially low-powered computers, and our technology can provide them with the much-needed visual intelligence for navigation, object tracking, or object avoidance.

Reducing the number of servers:

If our technology were deployed on servers, fewer of them would be needed to run AI models for image classification, speech recognition, or video intelligence.

Vision for blind people:

Once integrated into Google Glass-type devices, it would deliver much more useful information about the environment to a blind person than current 'smart' canes, which only indicate that 'something' is ahead.

Please tell us something surprising or amusing that one of you has discovered.

Founderspeak – /faʊndə-spik/, noun

Speech used by founders to get the attention of people and investors at large with an ulterior motive.

“I’m not looking for money, just your feedback.”

Translation: I need your money.

“We have 100s of organic sign-ups.”

Translation: We forced 100s of people to sign up.

Curious

What convinced you to apply to Y Combinator? Did someone encourage you to apply?

I've been following YC since 2015. Last October, YC held office hours in New Delhi, India. We were selected and spoke to Anu Hariharan about what we are building. Those 10 minutes were enough to make us question our assumptions about the startup. She convinced us of why technology companies proliferate in the Valley. We decided then to apply for the next batch (S18).

Recently, one of our mentors advised us to look for accelerator opportunities outside of India, as it will be hard for us to find an ecosystem here that supports technology startups.

Plus, our target market is mostly outside of India, so being in the Valley makes sense.

How did you hear about Y Combinator?

In my last year of college (2015). I've been following it since then.


See Why

Feedback

Executive Summary

The application presents a startup, Learn Venue, that aims to bring vision AI to low-cost security cameras, making them capable of real-time object detection. The founders claim their technology, VisionOS, is up to 62x faster than state-of-the-art methods and near completion, targeting integration with existing camera hardware through OEMs. Their financial model is based on licensing the software to camera manufacturers, and they project a significant total addressable market (TAM) based on security camera sales figures. The team consists of two founders with complementary skills in AI and UX, backed by notable previous entrepreneurial and academic accomplishments.

Investment Thesis Relevance

Learn Venue's vision of infusing low-cost devices with AI is closely aligned with YC's interest in disruptive technologies and AI applications. Their targeted niche within the security camera market addresses a clear need for more intelligent systems and fits the growing trend toward edge computing, where local devices handle complex processing tasks. However, the application does not clearly delineate any proprietary technology or significant barriers to entry that would deter competitors such as Google or Amazon from entering this space.

Recommendation: The team should emphasize any unique IP or partnerships that would make their solution difficult to replicate.

Market Understanding and Strategy

The application shows an understanding of the security camera market and notes that video is currently analyzed only after an incident. The founders have identified the pain points of cloud-based analysis, such as bandwidth costs and computational limits. However, they may be underestimating how quickly larger companies could pivot to on-device intelligence, or the possibility that this capability becomes a standard feature of new camera models, eliminating the need for a separate OS license.

Advice: Further validation with more OEMs and end consumers is recommended, alongside a contingency plan for market response once larger competitors notice their entry.

Business Model Evaluation

The licensing model has potential, but it relies on the assumption that OEMs will prefer licensing an external OS rather than developing their own solution. The $25-per-device price seems arbitrary and lacks a breakdown of how it was determined. There is a clear opportunity here, but broader adoption hinges on competitive pricing and on proving that the performance gains translate into real-world benefits for users.

Critique: Substantiate the pricing model with a cost-benefit analysis for OEMs and explore alternative revenue streams to mitigate risks associated with a singular licensing model.

Team Competency and Dynamics

The founders' backgrounds in AI, UX, and entrepreneurship instill confidence in their capability to execute their vision. Notably, Saurabh's experience with a national television pitch and a patent, alongside Nishchal's scholarship and academic accolades, are impressive. However, the team's relatively short acquaintance of 7 months may raise concerns about their long-term dynamics.

Recommendation: The application should include a discussion of how the founders' partnership has evolved to handle disagreements and strategic decisions to assure YC of their long-term stability.

Operational Efficiency and Milestone Achievement

The actual operational strategy remains vague. While they are two months away from creating a functional OS, the application does not lay out clear steps or milestones to reach market readiness. It’s also unclear how their operational efficiency is maintained, considering the founders also freelance to cover expenses.

Advice: Detail a clear timetable of milestones, including testing, OEM partnerships, and go-to-market strategy. Also discuss the steps taken to ensure that freelancing does not detract from startup growth.

Use of Language

The application is articulate and largely avoids jargon, presenting complex technological concepts in an accessible manner. Terms like "VisionOS" and "AI-powered operating system" are used effectively without in-depth explanations that might burden the reader. The founders manage to convey their vision succinctly, which is beneficial for capturing the attention of YC reviewers.

Feedback: The narrative would benefit from clearer explanations of terms like "on-device intelligence" and comparisons between object detection and image classification to educate readers unfamiliar with these distinctions.

Financial Health and Projections

The financials given are speculative and based on projected licensing fees. The application does well in giving plausible figures based on market size but may suffer from optimism regarding adoption rates. They also identify a total addressable market and serviceable available market but lack detail on how they arrived at their $25 licensing fee per device.

Concern: The projections may be overly optimistic, and the application would benefit from a more detailed breakdown of market research and price structure.

Constructive Criticism and Advice

While the vision is ambitious and well-aligned with YC’s interest in AI and technology, the application could benefit from further clarification on competitive advantage, IP protections, pricing strategy, a clearer operational plan with concrete milestones, and evidence of commitment from OEMs. Also, asserting the team's cohesion despite the short association could help alleviate potential concerns about their partnership.

Actionable Recommendations:

  1. Clearly articulate the competitive edge and any IP protections.

  2. Lay out a detailed pricing strategy that justifies the $25/device fee.

  3. Provide a more concrete operational strategy with specific milestones.

  4. Confirm interest or pre-commitments from OEMs to demonstrate market validation.

  5. Offer insight into the founders’ partnership stability and conflict resolution mechanisms.

Final Thoughts

Learn Venue has the potential to make an impact in the security camera industry with its visionary AI technology, but success will likely hinge on executing a strategic approach to market penetration, competition, and financial planning. Addressing these concerns will be key to convincing YC of their readiness for investment and their capacity to thrive in a competitive AI landscape.

Executive Summary

The application presents a startup, Learn Venue, that aims to revolutionize low-cost security cameras by embedding vision AI technology capable of object detection in real time. The founders claim their technology, VisionOS, is 62x faster than state-of-the-art methods and is near completion, targeting integration with existing camera hardware OEMs. Their financial model is based on licensing the software to camera manufacturers and they project a significant Total Addressable Market (TAM) based on security camera sales figures. The team consists of two founders with complementary skills in AI and UX, backed by significant previous entrepreneurial and academic accomplishments.

Investment Thesis Relevance

Learn Venue's vision of infusing low-cost devices with AI is closely aligned with YC's interest in disruptive technologies and AI applications. Their targeted niche within the security camera market addresses a clear need for more intelligent systems and fits within the growing trend towards edge computing, where local devices handle complex processing tasks. However, the application does not clearly delineate any proprietary technology or significant barriers to entry which could deter competitors like Google or Amazon from entering this space.

Recommendation: The team should emphasize any unique IP or partnerships that would make their solution difficult to replicate.

Market Understanding and Strategy

The application shows an understanding of the security camera market and acknowledges the need for post-incident video analysis. They have identified pain points associated with cloud-based analysis, such as bandwidth costs and computational limits. However, they may be underestimating the speed at which larger companies could pivot to on-device intelligence or the possibility that their technology becomes a standard feature integrated into new camera models, thus eliminating the need for a separate OS license.

Advice: Further validation with more OEMs and end consumers is recommended, alongside a contingency plan for market response once larger competitors notice their entry.

Business Model Evaluation

The licensing model has potential, but it relies on the assumption that OEMs will prefer an external OS rather than developing their own solution. The charging model, at $25 per device, seems arbitrary and lacks a breakdown of how this figure was determined. There is a clear opportunity here, but broader adoption hinges on competitive pricing and proving that the performance gains translate to real-world benefits for users.

Critique: Substantiate the pricing model with a cost-benefit analysis for OEMs and explore alternative revenue streams to mitigate risks associated with a singular licensing model.

Team Competency and Dynamics

The founders' backgrounds in AI, UX, and entrepreneurship instill confidence in their capability to execute their vision. Notably, Saurabh's experience with a national television pitch and a patent, alongside Nishchal's scholarship and academic accolades, are impressive. However, the team's relatively short acquaintance of 7 months may raise concerns about their long-term dynamics.

Recommendation: The application should include a discussion of how the founders' partnership has evolved to handle disagreements and strategic decisions to assure YC of their long-term stability.

Operational Efficiency and Milestone Achievement

The operational strategy remains vague. The founders say they are two months away from a functional OS, but the application does not lay out clear steps or milestones toward market readiness. It is also unclear how the team maintains focus, given that the founders freelance to cover expenses.

Advice: Detail a clear timetable of milestones, including testing, OEM partnerships, and go-to-market strategy. Also discuss the steps taken to ensure that freelancing does not detract from startup growth.

Use of Language

The application is articulate and largely avoids jargon, presenting complex technological concepts in an accessible manner. Terms like "VisionOS" and "AI-powered operating system" are used effectively without in-depth explanations that might burden the reader. The founders manage to convey their vision succinctly, which is beneficial for capturing the attention of YC reviewers.

Feedback: The narrative would benefit from clearer explanations of terms like "on-device intelligence" and comparisons between object detection and image classification to educate readers unfamiliar with these distinctions.

Financial Health and Projections

The financials are speculative and based on projected licensing fees. The application does well to ground its figures in market-size data, but it may be optimistic about adoption rates. It identifies a total addressable market (TAM) and a serviceable available market (SAM), yet gives no detail on how the $25 licensing fee per device was derived.

Concern: The projections may be overly optimistic, and the application would benefit from a more detailed breakdown of market research and price structure.
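As an example of the kind of breakdown that would help, the sketch below structures a bottom-up projection as units shipped by partner OEMs, times an adoption rate, times the $25 fee, evaluated across a range of adoption rates. The $25 fee is from the application; the unit volume and adoption rates are placeholder assumptions used only to show the shape of the calculation.

```python
# Hypothetical bottom-up revenue projection: revenue = units x adoption x fee.
# The $25 fee is from the application; unit volumes and adoption rates are
# placeholder assumptions chosen only to illustrate the structure.

LICENSE_FEE = 25.00                         # USD per device, from the application
ANNUAL_UNITS_VIA_PARTNER_OEMS = 2_000_000   # assumed shipments by signed OEMs
ADOPTION_RATES = [0.01, 0.05, 0.10, 0.25]   # assumed share of units licensing VisionOS

for rate in ADOPTION_RATES:
    revenue = ANNUAL_UNITS_VIA_PARTNER_OEMS * rate * LICENSE_FEE
    print(f"adoption {rate:>5.0%}: ${revenue:,.0f} per year")
```

Presenting projections as a range tied to explicit assumptions, rather than a single headline figure, would directly address the concern about optimism.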

Constructive Criticism and Advice

While the vision is ambitious and well aligned with YC's interest in AI and technology, the application would benefit from clearer articulation of its competitive advantage, IP protections, and pricing strategy, a more concrete operational plan with milestones, and evidence of commitment from OEMs. Demonstrating the team's cohesion despite the short working relationship would also help alleviate concerns about the partnership.

Actionable Recommendations:

  1. Clearly articulate the competitive edge and any IP protections.

  2. Lay out a detailed pricing strategy that justifies the $25/device fee.

  3. Provide a more concrete operational strategy with specific milestones.

  4. Confirm interest or pre-commitments from OEMs to demonstrate market validation.

  5. Offer insight into the founders’ partnership stability and conflict resolution mechanisms.

Final Thoughts

Learn Venue has the potential to make an impact in the security camera industry with its vision AI technology, but success will likely hinge on a well-executed strategy for market penetration, competition, and financial planning. Addressing these concerns will be key to convincing YC of the team's readiness for investment and its capacity to thrive in a competitive AI landscape.
