Double Deep Machine Learning
Author
Abstract
Major breakthroughs in data-centric machine learning algorithms have led to impressive performance in 'transactional' point applications such as detecting anger in speech, alerts from a face recognition system, or EKG interpretation. Non-transactional applications, e.g. medical diagnosis beyond the EKG results, require AI algorithms that integrate deeper and broader knowledge into their problem-solving capabilities, e.g. integrating knowledge about the anatomy and physiology of the heart with EKG results and additional patient findings. The same holds for military aerial interpretation, where knowledge about enemy doctrines on force composition and spread helps immensely in situation assessment beyond image recognition of individual objects. An initiative is proposed to build a Wikipedia for Smart Machines, meaning the target readers are not humans, but rather smart machines. Named ReKopedia, the initiative's goal is to develop methodologies, tools, and automatic algorithms to convert the human knowledge that we all learn in schools, universities, and during our professional lives into Reusable Knowledge structures that smart machines can use in their inference algorithms. Ideally, ReKopedia would be an open source shared knowledge repository similar to the well-known shared open source software code repositories. The Double Deep Learning approach advocates integrating data-centric machine self-learning techniques with machine-teaching techniques to leverage the power of both and overcome their corresponding limitations. For illustration, an outline of a $15M project is described to produce ReKo knowledge modules for medical diagnosis of about 1,000 disorders. AI applications that are based solely on data-centric machine learning algorithms are typically point solutions for transactional tasks that do not lend themselves to automatic generalization beyond the scope of the data sets they are based on.
Today's AI industry is fragmented, and we are not establishing broad and deep enough foundations to enable us to build higher-level 'generic', 'universal' intelligence, let alone 'super-intelligence'. We must find ways to create synergies between these fragments and connect them with external knowledge sources if we wish to scale the AI industry faster. Examples in the article are based on, or inspired by, real-life non-transactional AI systems I deployed over a decades-long AI career that benefit hundreds of millions of people around the globe. We are now in the second AI 'spring' after a long AI 'winter'. To avoid sliding again into an AI winter, it is essential that we rebalance the roles of data and knowledge. Data is important, but knowledge, both deep and commonsense, is equally important.
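The abstract's core idea, combining a data-centric learner with explicitly taught, reusable knowledge, can be sketched in a few lines. The following is a minimal illustration, not from the paper: a hypothetical diagnostic model produces scores over candidate disorders, and a tiny "taught" knowledge module of rules adjusts those scores using structured patient findings. All disorder names, rules, and findings here are invented for illustration.

```python
# Hypothetical scores a data-centric model might assign to candidate diagnoses.
model_scores = {"disorder_a": 0.40, "disorder_b": 0.35, "disorder_c": 0.25}

# A tiny reusable-knowledge module: each rule is a condition on patient
# findings, the diagnosis it affects, and a multiplicative weight
# (0.0 vetoes a diagnosis; >1.0 boosts it).
knowledge_rules = [
    (lambda f: f["age"] < 12, "disorder_a", 0.0),      # excluded in children
    (lambda f: f["ekg_abnormal"], "disorder_b", 2.0),  # consistent with EKG
]

def integrate(scores, rules, findings):
    """Reweight learned scores with knowledge rules, then renormalize."""
    adjusted = dict(scores)
    for condition, diagnosis, weight in rules:
        if condition(findings):
            adjusted[diagnosis] *= weight
    total = sum(adjusted.values())
    return {d: s / total for d, s in adjusted.items()}

findings = {"age": 8, "ekg_abnormal": True}
posterior = integrate(model_scores, knowledge_rules, findings)
# disorder_a is ruled out; disorder_b is boosted relative to disorder_c.
```

The point of the sketch is the division of labor: the learner supplies pattern-recognition scores, while the taught module contributes knowledge (here, age exclusions and EKG consistency) that no transactional classifier trained only on one data set would capture.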
Similar resources
A Hybrid Optimization Algorithm for Learning Deep Models
Deep learning is one of the subsets of machine learning that is widely used in Artificial Intelligence (AI) fields such as natural language processing and machine vision. The learning algorithms require optimization in multiple aspects. Generally, model-based inferences need to solve an optimization problem. In deep learning, the most important problem that can be solved by optimization is neural n...
A Hybrid Algorithm based on Deep Learning and Restricted Boltzmann Machine for Car Semantic Segmentation from Unmanned Aerial Vehicles (UAVs)-based Thermal Infrared Images
Nowadays, ground vehicle monitoring (GVM) is one of the areas of application in intelligent traffic control systems using image processing methods. In this context, the use of unmanned aerial vehicle thermal infrared (UAV-TIR) images is one of the optimal options for GVM due to their suitable spatial resolution, cost-effectiveness, and low volume of images. The methods that have been prop...
Global Warming: New Frontier of Research Deep Learning - Age of Distributed Green Smart Microgrid
The exponential increase in carbon dioxide and the resulting global warming could make many parts of planet Earth uninhabitable, with ensuing mass starvation. The rise of digital technology all over the world has fundamentally changed the lives of humans. The emerging technologies of the Internet of Things (IoT), machine learning, data mining, biotechnology, biometrics, and deep le...
Deep Double Sparsity Encoder: Learning to Sparsify Not Only Features But Also Parameters
This paper emphasizes the significance of jointly exploiting the problem structure and the parameter structure in the context of deep modeling. As a specific and interesting example, we describe the deep double sparsity encoder (DDSE), which is inspired by the double sparsity model for dictionary learning. DDSE simultaneously sparsifies the output features and the learned model parameters, under ...
ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning
Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by an order of magnitude. We examine training at reduced precision, both from a theoretical and a practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to...
Journal: CoRR
Volume: abs/1711.06517
Pages: -
Publication date: 2017