This article focuses on the present status of Artificial Intelligence (AI) and explores how cognition can be integrated into AI systems to push their existing boundaries.
Artificial Intelligence systems have reached an unprecedented level of sophistication and are omnipresent in today’s world, weaving into human life and revolutionizing many of its aspects. These systems, powered by algorithms and vast amounts of data, are reshaping industries, enhancing efficiency, and even influencing human social interactions. In education, for example, AI can be used to build personalized learning systems that profile learners and predict their future achievement. AI is transforming healthcare by enabling new diagnostics, accelerating drug discovery, and generating personalized treatment plans; in some cases, AI-powered robots assist surgeons in performing complicated procedures. The financial sector has come to rely on AI for fraud detection, algorithmic trading, credit scoring, and risk management. AI-powered systems have reshaped the entertainment industry by providing personalized content recommendations, while AI-generated music and art are blurring the boundary between human and machine creativity. All of these systems have made human lives more comfortable through AI integration.
Most of the above systems perform well in pre-defined environments with known parameters, where they can surpass humans thanks to sheer computational power. IBM's Deep Blue, which defeated world chess champion Garry Kasparov in 1997, is an example of this type of AI system. Deep Blue could identify its own pieces and the opponent's pieces on the board and evaluate possible moves, yet it had no memory with which to reflect on past games and inform future decisions. In real life, AI systems that function only in pre-defined environments have a major limitation: all domain and operational information must be known beforehand for a given scenario, which is usually not practical.
Machine Learning (ML) based AI systems possess a limited memory: they can handle complex classification tasks and use historical data for reasoning and prediction. Recent advances in ML are largely due to the deep learning revolution. Self-driving vehicles are a classic, complex example in this category: inputs from different sensors are combined to identify everything around the vehicle and to predict what those objects might do next. Nevertheless, performance remains vulnerable to outliers and extreme situations. Systems of this kind power chatbots, virtual assistants, and natural language processing applications, and they represent the current state of AI.
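To make the idea of "limited memory" prediction concrete, here is a deliberately simplified Python sketch, not the actual software of any self-driving vehicle: it keeps only the last few observed positions of a tracked object and extrapolates where the object might be next, assuming roughly constant velocity. All names and numbers are invented for illustration.

    from collections import deque

    class LimitedMemoryTracker:
        """Keeps only the most recent observations of one tracked object."""

        def __init__(self, memory_size=5):
            self.history = deque(maxlen=memory_size)   # older observations fall away

        def observe(self, position):
            # position is an (x, y) pair coming from the perception pipeline
            self.history.append(position)

        def predict_next(self):
            # Extrapolate the next position assuming roughly constant velocity.
            if len(self.history) < 2:
                return self.history[-1] if self.history else None
            (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
            return (x1 + (x1 - x0), y1 + (y1 - y0))

    tracker = LimitedMemoryTracker()
    for detection in [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]:   # simulated detections
        tracker.observe(detection)
    print(tracker.predict_next())   # -> (3.0, 1.5): the object keeps moving the same way

Real perception stacks learn far richer motion models, but the essential point is the same: only a short window of recent experience is retained and used for prediction.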
Both of the above types of AI systems need vast amounts of data for training, and they cannot operate at full scale, or with full accuracy, in unpredictable environments. The models embedded in these systems can be retrained to improve performance; however, if the environment in which a model was trained changes, retraining from scratch is usually required.
Because of these limitations, AI's elusive ambition of developing truly humanoid robots remains distant.
Imagine self-driving cars with social awareness: cars that understand the intentions and emotions of pedestrians, cyclists, and other human drivers. Such vehicles would adapt their behavior to the prevailing situation and communicate their intentions effectively, in short, act cognitively. Developments like these would lead to safer and more harmonious interactions on the road and make travelling a pleasant activity for everyone. We are still far from achieving this ambitious goal, yet AI researchers, motivated by observations of human cognition, are continuously exploring mechanisms to push the boundaries of AI and build more cognitively able systems.
Cognition is defined as the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. Put simply, cognition is the ability to perceive information from the environment or generate it through internal mechanisms, process that information at the appropriate level of abstraction, store and retrieve it, make decisions, and produce appropriate responses. Self-reliance, the ability to speculate about situations, and independent, adaptive, and anticipatory action are innate cognitive abilities of humans.
Cognitively able AI systems can broadly be categorized into three paradigms: cognitivist, emergent, and hybrid. In the cognitivist paradigm, cognition is assumed to be achieved through computations performed on internal symbolic knowledge provided by a domain expert. In the emergent paradigm, the system aims to maintain its own autonomy by cognitively understanding its embodied environment, and it depends on its history of interactions and experiences. The hybrid paradigm draws on the best of both: it uses symbolic knowledge and representations that the system constructs itself as it interacts with and explores the world, together with emergent models of perception and action, to build up its knowledge.
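The contrast between these paradigms can be caricatured with a toy example. The Python sketch below, with entirely invented rules, thresholds, and names, decides whether an agent should brake: the cognitivist version follows symbolic rules supplied by an expert, the emergent version learns from its own interaction history, and the hybrid version falls back on the rules until enough experience has accumulated. It is a didactic illustration, not a real implementation of any of these paradigms.

    # Cognitivist: the decision follows symbolic rules supplied by a domain expert.
    def cognitivist_brake(distance_m, obstacle_type):
        rules = {"pedestrian": 30.0, "vehicle": 15.0}        # expert-provided thresholds
        return distance_m < rules.get(obstacle_type, 20.0)

    # Emergent: the decision is learned from the system's own history of interactions.
    class EmergentBrake:
        def __init__(self):
            self.experience = []                              # (distance, had_to_brake) pairs

        def learn(self, distance_m, had_to_brake):
            self.experience.append((distance_m, had_to_brake))

        def decide(self, distance_m):
            # Brake if past experience at similar distances usually required braking.
            similar = [b for d, b in self.experience if abs(d - distance_m) < 5.0]
            return sum(similar) > len(similar) / 2 if similar else True   # cautious default

    # Hybrid: rely on the expert rules until enough experience has accumulated.
    def hybrid_brake(agent, distance_m, obstacle_type):
        if len(agent.experience) < 10:
            return cognitivist_brake(distance_m, obstacle_type)
        return agent.decide(distance_m)

In a genuine emergent or hybrid system the "experience" would of course be far richer than a list of distances, but the division of labour mirrors the paradigms described above.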
It follows that if we want to build cognitively able AI systems, we must explore new mechanisms at every stage, from capturing and storing data to developing novel reasoning methods. Such systems will require robust perception through sensor fusion: mechanisms that integrate data from sensors such as cameras, LiDAR, and radar into a coherent understanding of the environment. New mechanisms for knowledge representation and reasoning will also be needed, extending beyond the limitations of existing techniques such as knowledge graphs, semantic networks, ontologies, and logic-based systems.
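As one small illustration of what sensor fusion can mean at its simplest, the sketch below combines independent distance estimates from a camera, a LiDAR, and a radar by weighting each estimate inversely to its assumed noise variance. The sensor readings and variances are invented for the example; real perception pipelines are far more sophisticated.

    def fuse_estimates(estimates):
        # estimates: list of (measured_distance_m, variance) pairs, one per sensor.
        weights = [1.0 / variance for _, variance in estimates]
        fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
        fused_variance = 1.0 / sum(weights)
        return fused, fused_variance

    readings = [
        (24.8, 4.00),   # camera: inexpensive but noisy depth estimate
        (25.3, 0.25),   # LiDAR: precise range measurement
        (25.0, 1.00),   # radar: moderate precision, robust in bad weather
    ]
    distance, variance = fuse_estimates(readings)
    print(f"fused distance: {distance:.2f} m (variance {variance:.2f})")
    # -> fused distance: 25.22 m (variance 0.19)

The fused estimate is both more accurate than any single sensor and more robust to the failure modes of each, which is precisely why coherent multi-sensor perception is a prerequisite for cognitively able systems.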
However, developing cognitively able AI systems remains challenging, both because of the complexity of human cognition itself and because of our limited understanding of it. Replicating cognitive features such as perception, memory, reasoning, and emotion in machines will require not only advanced algorithms but also deep insights from neuroscience and psychology. Moreover, replicating brain mechanisms in isolation will not be sufficient: the brain and its cognitive capacities evolved for a purpose, together with a body and within a particular environment, and it is this embodied brain that generates cognition in context.
When developing cognitively able AI systems, it is essential to ensure ethical and responsible deployment. As these systems become more capable of understanding and influencing human behavior, issues of privacy, autonomy, and control will become more acute. Addressing these concerns requires interdisciplinary collaboration among researchers, policymakers, and ethicists.
In conclusion, integrating cognition into artificial intelligence systems would mark a paradigm shift in AI research, pushing the frontiers of what AI can achieve. Despite numerous challenges, both known and unknown, the potential benefits of cognitive AI are immense, offering opportunities for innovation and societal advancement. On the journey toward cognitively able AI systems, it is crucial to adhere to ethics and to align with our values and aspirations as a society.
By
Dr. Menaka Ranasinghe
Senior Lecturer
Department of Electrical and Computer Engineering
The Open University of Sri Lanka
President – Sri Lanka Association for Artificial Intelligence (SLAAI) - 2020