A brief account of computer vision and how it is seeping into our lifestyle

The other day I was traveling with my friend in his car. Driving through the jam-packed roads at a snail’s pace, he told me in great detail about the umpteen cars on the road and the number of new cars being registered each day, all adding to the traffic.

I’ve heard of the driverless car, he said. A car that is autonomous and decides when and where to stop on its own. Is that even possible?

Computer vision, I answered.

I had no idea the conversation was going to take a technical turn, or drift into how this technology will impact our lifestyle. What started as a fun chat was turning unexpectedly serious. The conversation did not end with a concrete conclusion, but it did leave me in a storm of thoughts.

The rate at which technology is evolving is unprecedented, and computers are being trained to the point of becoming self-sufficient.

It’s the age of Siri, Alexa, and Google Assistant – the AI-enabled digital assistants you are surely already aware of.

The power of machines to see

One of the greatest breakthroughs in technology is bestowing on computers the power of the human visual system, known as machine vision or computer vision – a subfield of AI.

I am sure you have amused yourself by sticking your tongue out, or framing rabbit ears or flower headgear on your head, while using Snapchat filters. If so, then you are not untouched by computer vision.

Among the general public, the understanding of computer vision is fairly sketchy. If I ditch the technical jargon and put it in simple English, computer vision is nothing but mimicking the human eye, the human visual cortex, and the way the human eye responds to visual data.

Machines have been taught to extract and interpret information from images in much the same fashion as human eyes do. But where humans have the ability to connect the dots to past experiences, machines are deprived of this prerogative.
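For the curious, here is a minimal sketch of what “machine seeing” looks like in practice: a neural network pretrained on millions of labelled photos takes in raw pixels and returns its best guess about what the picture contains. The library (PyTorch/torchvision), the model, and the file name are purely illustrative choices of mine, not tied to any product mentioned in this article.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a network pretrained on ImageNet (roughly 1.2 million labelled photos).
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg")      # placeholder file name: any photo will do
batch = preprocess(image).unsqueeze(0)      # the network expects a batch of images

with torch.no_grad():
    scores = model(batch)                   # one score per ImageNet class (1,000 in total)
print("Predicted class index:", scores.argmax(dim=1).item())

That, in a handful of lines, is the “seeing” part; the connecting-the-dots-to-experience part is exactly what machines still lack.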

AI and the very notion of computer vision have been around since the 1940s. In his essay As We May Think (1945), Vannevar Bush put forward the idea of a system that could amplify people’s own knowledge and understanding.

Five years down the line, Alan Turing, in his 1950 paper Computing Machinery and Intelligence, laid the foundation for the concept of machines being able to simulate human beings – to think like one and do intelligent things (like playing chess). In the last decade, the availability of powerful computers, inexpensive cameras, improved algorithms, and a better understanding of vision systems has proved to be a catalyst, and hence the tremendous progress in this field.

The phenomenal technology of computer vision is penetrating our lifestyle. Let’s have a look at how.

The advancement of computer vision, against the backdrop of today’s muddled-up world, is revolutionizing the human lifestyle.

A product of computer vision that many are aware of is the self-driving, or driverless, car. Tech giants like Tesla, Waymo, and Uber are pouring money into R&D. Mobileye, an Israel-based company acquired by Intel in 2017 for $15.3 billion, announced at the Consumer Electronics Show in Las Vegas in January 2019 that it is partnering with the Beijing Public Transport Corporation to introduce autonomous driving technology. You can expect these Level 4 autonomous vehicles to hit the streets of Beijing by 2022. Nuro, a robotics company based out of California, is revolutionizing the way local goods are transported. This autonomous delivery startup has raised a whopping $940 million from the SoftBank Vision Fund.

While the automotive industry is in full swing to automate vehicles, retail stores are not far behind. Amazon unveiled its first cashierless store – Amazon Go – to the general public in January 2018. Various start-ups and retail chains across the globe followed suit. Why go beyond the borders when we have one in our own backyard? A first-of-its-kind cashierless store in India – Watasale – opened in Kochi, Kerala. Operating in an area of 500 square feet, the autonomous store has funding offers from Mitsui & Co., Ltd. – one of the largest general trading companies of Japan. With Amazon Go stores estimated to touch $4.5 billion in 2021, cashierless retail stores are definitely the next big thing in the retail industry. The more I learn about Amazon, the more I feel they are on a spree to revolutionize the brick-and-mortar retail sector. If you, just like me, hate hopping into countless dresses in the trial room, then the answer to your problem is Amazon’s Virtual Mirror.

The Virtual Mirror, for me, is a blend of computer vision with some quickness. And that’s exactly what Virtu has come up with. Vivid, their flagship program to transform brick-and-mortar retail stores in India, will eventually render trial rooms obsolete. Have a look for yourself at how a virtual mirror curated by Virtu helps you see yourself in an attire effortlessly, without you having to change.

Computer vision has not only touched the contemporary lifestyle but has also embedded its roots in the healthcare arena and is slowly occupying the operating space. The idea of creating a 3D model of a tumor and delineating the exact borders between the unwanted tissue and the healthy tissue, with the application of computer vision, will tremendously help doctors gain insight into where to apply treatment. Various multinational companies, startups, and health institutions are working independently and in collaboration towards achieving this goal. One of the best examples of computer vision applied to radiology is Microsoft’s FDA-approved InnerEye, which can detect tumors from images. However, there are various challenges to overcome, the major one being the creation of large datasets. According to a report on NCBI, data from a single institution will not be sufficient to create a flawless model. Lifting the barriers to sourcing data from multiple institutions is what needs to be addressed.

For surgeons in the operating room, one crucial focus lies in computing blood loss during surgery, which today relies largely on visual interpretation. To address the issue, Gauss Surgical has devised a computer vision application, Triton, for real-time blood loss monitoring. The company recently raised $20M in Series C funding from SoftBank Ventures Korea and Northwell Health, among others, to further upgrade the Triton platform. Triton analyzes images of surgical sponges and, with the help of machine learning, estimates blood loss with the utmost accuracy. To date, more than 50 hospitals have embraced this FDA-approved technology, and it has proved to be a boon for doctors who otherwise relied upon their own visual estimations.

I can see computer vision spreading into almost every sphere of our lives. It has even taken the sports domain under its ambit. The International Gymnastics Federation (FIG) will introduce a computer vision-driven Judging Support System to assist judges at the Tokyo 2020 Olympics. The system, under development by Fujitsu since 2016, will first be employed at FIG gymnastics events in 2019, including the 2019 FIG World Cup series, and will subsequently be used at the Tokyo 2020 Olympics. The system creates 3D images of the gymnast’s body using lidar technology. The images are then analyzed by an AI algorithm – skeleton recognition technology, as Fujitsu calls it – to calculate the angles of the athlete’s various joints. The processed information is ultimately carried forward to the judges.
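To make that last step a little more concrete, here is a hypothetical sketch of how the angle at a single joint could be computed once the skeleton keypoints are known. This is not Fujitsu’s actual code; the keypoint values and the simple vector geometry are illustrative assumptions of mine.

import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c (each an x, y, z triple)."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b                                    # the two "bones" meeting at the joint
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Illustrative shoulder, elbow, and wrist keypoints (in metres) from a recognised skeleton
print(joint_angle([0.10, 1.40, 0.00], [0.10, 1.10, 0.00], [0.40, 1.10, 0.05]))

A judging system would repeat this for every joint and every frame, which is exactly the kind of tireless measurement that is hard for a human judge and trivial for a machine.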

Here is some not-so-good news for proxy lovers. Deviating from the old approach of maintaining attendance registers, organizations switched to biometric attendance systems. The time is not far when biometrics will be obsolete and we will embrace automatic attendance management systems that detect faces and record attendance on their own. For me, this was hard to digest, and even harder when I learned that the day this technology seeps into our institutions and organizations is not far off. Aindra, an AI-powered MedTech company, is all set to revolutionize government schools in Tamil Nadu, India, with its smart attendance system enabled with computer vision and machine learning.
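For illustration only, here is a minimal sketch of the idea behind face-based attendance, using OpenCV’s bundled Haar cascade face detector. A real system, such as the one described above, would also need face recognition to identify whose face each detection belongs to; the image name here is a placeholder.

import cv2
from datetime import datetime

# OpenCV ships with a pretrained Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("classroom.jpg")                  # placeholder: a frame from the room camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# A real attendance system would match each detected face to a student before logging it.
print(f"{datetime.now():%Y-%m-%d %H:%M}  faces detected: {len(faces)}")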

I believe technology does not truly serve humankind until it benefits the lower strata of society as well. And here we are – computer vision for the agriculture sector. Blue River Technology, a pioneer in improving the agricultural ecosystem using computer vision, is transforming the way herbicides and fertilizers are used. The company, acquired by John Deere in a $305 million deal, is making a stride towards the optimal use of herbicides and fertilizers with its “see and spray” robotic technology built around computer vision. The technology has been found to cut the use of chemicals by more than 90%. This helps keep a check on the rampant use of chemicals in agriculture and also helps farmers save costs in this department.

Data privacy and other concerns need to be highlighted

A technology, whether old or new, has its downsides. Take driverless cars, for example. On March 18, 2018, a car from Uber’s self-driving fleet struck a pedestrian in Arizona, resulting in her death. Such fatal incidents raise questions about how safe computer vision is. According to a recent American Automobile Association (AAA) survey, 71% of Americans are scared of using driverless cars.

One of the major concerns around computer vision is data privacy. For a robust model, a large number of images needs to be fed into the machines. Consuming such a large volume of images and data will certainly, at some point, lead to data privacy breaches.

Although stringent laws have been laid down for safeguarding data privacy, the innate ambiguity around data safety still prevails. How information and data are collected, observed, and interpreted is a major cause of concern. A step in the right direction is that humans should never visually see footage containing images of a person, but should only perceive processed outputs.

While the possible risk of image and video data breaches is a bane for computer vision, it at the same time stands out as a boon for businesses that rely on images and videos to generate leads. A business’s digital marketing strategy uses a lot of images and videos, and computer vision can now help safeguard these avenues. ZeroFOX, a cybersecurity company, announced in February 2019 a new AI- and computer vision-based technology that will help keep potential threats to a company’s posts, images, videos, and websites at bay.

The bridge to a new terrain

When asked in an interview back in 2001 where he saw technology heading in the future, Bill Gates answered: “The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life.”

No doubt, AI-powered computer vision is bridging the gap between what prevails today and what the future holds. With research, development, and investment in this domain in full swing, the future will see a transformation across sectors. From the developers’ perspective, the focus is on making the technology cost-effective and accessible to the general public. The technology will quietly keep integrating with users’ needs. For that to happen, it will thrive on an enormous amount of data and images, and hence data concerns are inevitable. In the near future, from retail stores to operating theaters, infrastructure to agriculture, the entire lifestyle spectrum will be swathed in computer vision.

Lekha Nair
