Human beings almost always have a reason for what they do. We try to rationalise our decisions and actions; we are wired to make sense of our environment and of why we do the things we do. When something doesn’t make sense, we become confused and disoriented. Can we ever overcome this deep-seated aspect of human nature?
Rational Artificial Intelligence
During a discussion about artificial intelligence, Been Kim, a Google Brain research scientist, offered an explanation for humanity’s propensity to rationalise everything. An expert in interpretable machine learning, Kim and her team dream of building a self-sustaining AI program: software capable of explaining itself with little to no human intervention.
While that AI is still being fine-tuned, Kim has already come up with a tool that helps AI systems explain how they reached their conclusions. The support tool, named Testing with Concept Activation Vectors (TCAV), is designed to plug into existing systems. According to Quanta Magazine, once plugged into a machine learning algorithm, it sifts through the different factors and data sets involved and then generates polished results.
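At a high level, TCAV learns a direction in a model’s activation space that corresponds to a human concept, then measures how sensitive the model’s predictions are to that direction. The sketch below is a minimal, illustrative version in Python: the toy activations, the placeholder gradients, and the least-squares stand-in for a linear classifier are all assumptions, not the real TCAV implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations" from one layer of a model (8-dimensional, hypothetical):
# one batch for images showing the concept (e.g. "striped"), one for random images.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# 1. Fit a linear separator between the two sets; its normal vector is the
#    Concept Activation Vector (CAV). Least squares stands in for a classifier.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = np.linalg.lstsq(X, y, rcond=None)[0]
cav = w / np.linalg.norm(w)

# 2. TCAV score: the fraction of examples whose prediction would increase if
#    their activations moved in the concept direction (directional derivative > 0).
#    Here the gradients are random placeholders for d(logit)/d(activation).
grads = rng.normal(size=(50, 8))
tcav_score = float(np.mean(grads @ cav > 0))
print(tcav_score)
```

A score near 1.0 would suggest the concept consistently pushes the model towards a class; near 0.5, with random gradients as here, it has no consistent influence.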
As AI grows more prominent each year and tools like TCAV become more in demand, the field faces intense scrutiny. Gender and racial controversies have surrounded its development, including biases in training data. Critics look for mistakes while systems are still in the development phase, when they are easier to address; corrective measures are much harder to implement once an AI reaches its full potential.
What Can Advanced Artificial Intelligence Do?
So what do we mean when we say AI can now explain its conclusions? Take, for instance, a facial recognition algorithm. The AI matches facial inputs against a database of individuals (e.g. criminal records or applicant records) and generates conclusions that people can accept, reject or correct.
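To make that matching step concrete, here is a minimal sketch in Python. It assumes faces have already been reduced to embedding vectors; the names, vectors and the 0.9 similarity threshold are hypothetical, and a real system would produce the embeddings with a trained face-encoder network rather than hand-written numbers.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical database of face embeddings (identity -> vector).
database = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob": np.array([0.1, 0.8, 0.3]),
}

def match(query, db, threshold=0.9):
    """Return the best-matching identity, or None if no match clears the threshold."""
    name, score = max(((n, cosine_sim(query, v)) for n, v in db.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

print(match(np.array([0.85, 0.15, 0.25]), database))  # prints "alice"
```

The threshold is the point where a human steps in: a confident match is accepted, a borderline one can be reviewed, rejected or corrected, which is exactly the accept/reject/fix loop described above.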
Been Kim told Quanta Magazine that there is no need for a tool that completely explains how an artificial intelligence makes its decisions. Current technology is good enough for the time being: it can flag likely issues and give humans insight into the possible reasons why something didn’t work.
The concept is similar to the warning labels we often see on hazardous tools. Meaning, use AI programs, but beware of the risks. Kim’s point is that you need to be “careful” with this technology, the way you handle a blade so as not to cut your finger. As far as AI is concerned, some routes are safer but will take forever to generate results; the riskier routes deliver results faster and show more development. Scientists therefore tend to take the risky route, but they must proceed carefully.
What is Google Brain?
By way of a brief introduction, Google Brain is a team of scientists based at Google headquarters. Its mission is to build intelligent machines that help improve people’s lives. The team’s goal is to construct flexible, self-sustaining learning models (artificial intelligence) with unique features, which requires gathering large amounts of data and devising efficient computations.
Google Brain describes its approach as centred on deep learning. The team works towards making a difference by providing practical solutions to problems of fundamental importance. As experts in AI and algorithmic systems, they build tools to improve and expand machine learning research, and in doing so hope to unlock the practical value of artificial intelligence and self-sustaining algorithms for the benefit of humanity at large.
Google Brain traces its beginnings to 2011, when it started as a part-time joint research project between Stanford University professor Andrew Ng and Google researchers Greg Corrado and Jeff Dean. Ng had been developing deep learning techniques since 2006, seeing them as a way to crack hard AI problems. Five years later, he collaborated with Corrado and Dean to develop and scale a deep learning system called DistBelief, while also working on cloud computing infrastructure at Google.
The project proved so successful that it was later incorporated back into Google. In 2012, Google Brain developed algorithms that could recognise pictures and sort them into specific categories (e.g. nature, places, animals).