OmniEyes builds search engine for real world, allowing people to stay on top of city dynamics with real-time images

Fisher Yu, DIGITIMES, Taipei

Today people use their computers or smartphones to access digital maps, find a parking space or check out restaurant recommendations; this is part of everyday life now. However, many people have also had the experience of arriving at a vacant parking space shown on their phone only to find it already taken, or of visiting a restaurant of interest only to discover it closed down six months earlier. In other words, much of the city information people can access is outdated. In contrast, OmniEyes - Next-Generation Mobile Video Platform enables the creation of a city's real-time image data through the simplest and most readily deployable approach.

Founded by a team led by Chun-Ting Chou, associate professor at the Graduate Institute of Communication Engineering, National Taiwan University, together with Shou-De Lin and Ai-Chun Pang, both professors at the university's Department of Computer Science and Information Engineering, OmniEyes set the goal of bringing its research results to market within three years of joining the Ministry of Science and Technology's Startup program in December 2017. The team's dedication, combined with the strong feasibility of its idea, brought the research to maturity in late 2018. Backed by venture capital investments, the team then established a startup firm to commercialize the technology.

OmniEyes - Next-Generation Mobile Video Platform makes city information more up-to-date and valuable resources more readily available through fog computing. According to Chou, AI has become the most critical trend in the information technology (IT) sector, and images are the most widely available data today. With dash cameras installed on virtually every vehicle, the amount of video data being captured is massive. If this video data can be put to good use, the possibilities for innovation are endless; it may even reshape how digital map data is created.

Chou cites Google Maps as an example. It is the most widely used digital map, and its precision is improving rapidly through the collection of massive usage data. However, the city information on Google Maps is updated only every 1.5 to 2 years, so what users see on their mobile phones is old information. OmniEyes, on the other hand, gathers images captured by cameras installed on buses, taxis and delivery trucks on the road, so that the image data on its back-end platform is refreshed once every five minutes. The information is kept up-to-date, letting users stay on top of city dynamics in real time rather than fall out of sync with outdated, static information.
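The five-minute refresh cycle described above can be pictured as a freshness-keyed store: the back end keeps only the newest image per road segment and treats anything older than the refresh window as stale. The sketch below is purely illustrative; the class, field names and segment identifiers are hypothetical, and OmniEyes' actual platform design is not public.

```python
import time

REFRESH_WINDOW_S = 5 * 60  # the five-minute refresh cycle mentioned in the article

class CityImageStore:
    """Hypothetical back-end store keeping the newest image per road segment."""

    def __init__(self):
        self._latest = {}  # segment_id -> (timestamp, image_ref)

    def ingest(self, segment_id, image_ref, ts=None):
        """Record an incoming image, keeping only the newest per segment."""
        ts = time.time() if ts is None else ts
        current = self._latest.get(segment_id)
        if current is None or ts > current[0]:
            self._latest[segment_id] = (ts, image_ref)

    def lookup(self, segment_id, now=None):
        """Return the segment's image only while it is still fresh."""
        now = time.time() if now is None else now
        entry = self._latest.get(segment_id)
        if entry is None or now - entry[0] > REFRESH_WINDOW_S:
            return None  # stale or missing: caller falls back to static map data
        return entry[1]

store = CityImageStore()
store.ingest("xinyi_rd_sec3", "frame_0042.jpg", ts=1000.0)
print(store.lookup("xinyi_rd_sec3", now=1200.0))  # within 5 min -> frame_0042.jpg
print(store.lookup("xinyi_rd_sec3", now=1400.0))  # older than 5 min -> None
```

The point of the sketch is the eviction rule: freshness, not completeness, is what distinguishes this kind of platform from a conventionally updated map.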

Achieving OmniEyes' goal is not easy. Lin points out that dash cameras currently on the market mostly only record. Even if communication modules are added to transmit the recorded video to the back-end platform in real time, the large video files can consume a great deal of bandwidth. Furthermore, 99% of the video footage captured by vehicles on the road is not useful data. The biggest challenge is how to add lightweight AI capability to dash cameras with limited processing power so that they can filter out useless video before transmission to the back-end platform, saving bandwidth.
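The filtering idea described above can be sketched as a simple gate: a cheap on-device relevance check runs on each frame, and only frames that pass are queued for upload, so the roughly 99% of uninteresting footage never consumes uplink bandwidth. This is a minimal illustration under stated assumptions; the scoring function and threshold are placeholders, not OmniEyes' actual (unpublished) lightweight model.

```python
def relevance_score(frame):
    """Placeholder for a lightweight on-device model.

    A real system might run a small quantized detector here; this stub
    simply reads a precomputed score attached to the frame record.
    """
    return frame.get("score", 0.0)

def filter_for_upload(frames, threshold=0.8):
    """Keep only frames the on-device model considers informative."""
    return [f for f in frames if relevance_score(f) >= threshold]

captured = [
    {"id": 1, "score": 0.05},  # empty road: discarded locally
    {"id": 2, "score": 0.95},  # e.g. a changed storefront: worth uploading
    {"id": 3, "score": 0.10},  # empty road: discarded locally
]
to_upload = filter_for_upload(captured)
print([f["id"] for f in to_upload])  # only frame 2 survives the filter
```

The engineering difficulty the article describes lies entirely inside the placeholder: making `relevance_score` accurate enough to be useful while cheap enough to run on a low-power camera.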

Pang pointed out that fog computing and the increasingly popular edge computing both work by having terminal equipment handle a share of the system's computation, reducing latency as well as the load on the cloud platform and network bandwidth. But in contrast to edge computing, where the terminal equipment must possess substantial computing power, fog computing extends the scope to include handsets and even dash cameras with very limited computing capability, so its coverage is much wider. For OmniEyes, dash cameras are the major terminal device.

For academic research purposes, notes Chou, a high-end dash camera might solve the problem. However, OmniEyes set its heart on designing a practical and marketable product, so it insisted on using dash cameras already on the market and equipping them with AI capability through feasible algorithms. Through a year of R&D, OmniEyes has enabled dash cameras on 100 buses, 40 taxis and 10 delivery trucks to send image data collected over as much as 10,000 kilometers of daily road trips to the back-end platform, making city information available in real time.

Chou proposes a three-phase plan for OmniEyes going forward. In phase 1, OmniEyes endeavors to make its technology ready to use; for example, leveraging cameras already installed on a variety of vehicles is a viable way for the technology to create value. In phase 2, OmniEyes looks to license its technology to manufacturers so they can integrate OmniEyes' software into their automotive devices. In phase 3, OmniEyes will make its image data platform available through a mobile phone app that consumers can download to access city information in real time. Chou envisions OmniEyes - Next-Generation Mobile Video Platform as a search engine for the real world: anyone can contribute information to the platform while enjoying access to it. This model of data sharing and co-creation enables more effective use of city resources.