THIS IS A WORK IN PROGRESS
Many countries are exploring ways to avoid being left behind in AI development, even though the risks and opportunities of AI are highly controversial.
Since several of us have been working in the Artificial Intelligence space for quite a while, we decided to put together an initial framework for country leaders. We believe that only a program developed in concert with other stakeholders has a good chance of being accepted by all.
It is being developed in collaboration with our ambassadors in roughly 20 countries. More helping hands are very welcome. We will be updating this program as we go, leaving the previous versions as blog posts here. Theoretical AI is developing rather quickly, and so should this framework.
AI Leadership Framework
The “AI Leadership Framework” is a project in development. We are working with a large number of great minds from over 20 countries to postulate a framework that can be used as is, or as a base for a country’s own AI Leadership Strategy. While we see the risks that AI carries, we see the grander opportunity for humanity. And if nothing else, in the next 50 years we will learn more about ourselves than during the entire history of humanity.
1) Data Awareness
Data is the new natural resource for the highly developed world. What gold, coal, oil, even healthy soil and so forth were in the past, data is in our immediate future. There is an abundance of data; the key is to intelligently harvest the overwhelming amount that is available. Being aware of the fact that we are all creating new data, billions of data points every single day, is the first step toward creating policies and infrastructure to protect that data, but also the motivation to share it. As in previous technology developments, the US has been in a leadership position early on. Data powerhouses like Google, Amazon, Facebook, the credit card organizations and the telecommunication companies already sit on data like no one else.
Country leaders should understand: data is generated in almost every country, including emerging countries and even some less developed ones. Not being able to use that data in a secure and privacy-protected way is a major obstacle to a country's development. The list of natural resources just gained an additional entry.
2) Access to Data
The more data that can be accessed, the better the results from AI systems. In particular in the early days of any AI development, it is critical to get access to an enormous amount of data; enormous means several hundred thousand records about the same topic. Over-regulated and fearful societies, and those with fear-driven privacy campaigns, are clearly in a disadvantaged position. Amazon has access to trillions of dollars worth of shopping from past years; together with the data usage from its AWS business unit, Amazon is one of the top players globally. Google sits on quadrillions of data points from all the searches and knows what people look for, why, what problems they have, what products they search for, what illnesses they have and so forth. Facebook, unlike the other two, has much more social-interaction data. The recent collaboration with credit card institutions provides Facebook with a potent mix of social and commercial data and relations. Of course privacy is an issue, but not using the data is like having hectares of apple trees and letting the apples rot on the ground.
Country leaders should recognize that blocking data access because of an inability to protect the privacy of the data's owners sets those countries back. Retraining a population that 'it is OK now' is a daunting task.
3) Engineering
AI is no longer an IT discipline. AI requires top-level mathematical knowledge, an understanding of neural networks, of reasoning and decision-making processes, of cognitive behavior, awareness know-how, and much more. Image and speech recognition skills are simply a given. The vast majority looks at AI as software code that is written by humans and can at best do as well as the programmer. That an AI system can not only perform computational processes but also build intelligent correlations, make decisions based on them, and outperform humans in almost every predictable task is foreign to most of that view. The algorithms (mathematical procedures) allow AI systems to learn from hundreds of thousands of situations, then construct millions of situations similar to those they learned from, and so build a richer set of "experience" than any human could ever accumulate. While this is good, we need to understand the implications.
Country leaders should know that developing leading AI applications no longer requires only engineers, but also social scientists, biologists, mathematicians, human behavior experts and related skills.
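To make the idea of "learning from hundreds of thousands of situations and constructing millions of similar ones" a little more concrete, here is a minimal sketch. It is not part of the framework; the toy data, the simple "nearest average" model and every number in it are made-up assumptions, chosen only to show the pattern of learning from recorded situations and then enriching that experience with synthetic variations.

```python
# Illustrative sketch only: learn from many recorded "situations",
# then generate many similar synthetic ones to enlarge the experience.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we recorded 100,000 past situations, each described by 3 measurements,
# with a known outcome (0 or 1) for each.
X = rng.normal(size=(100_000, 3))
y = (X.sum(axis=1) > 0).astype(int)

# Learn a very simple model from that experience: the average situation per outcome.
mean_0 = X[y == 0].mean(axis=0)
mean_1 = X[y == 1].mean(axis=0)

# "Construct millions of situations similar to those learned from":
# here simply by adding small random perturbations to the recorded ones.
X_syn = np.repeat(X, 10, axis=0) + rng.normal(scale=0.1, size=(1_000_000, 3))
y_syn = np.repeat(y, 10)

# Classify each synthetic situation by whichever learned average it is closer to.
dist_0 = np.linalg.norm(X_syn - mean_0, axis=1)
dist_1 = np.linalg.norm(X_syn - mean_1, axis=1)
pred = (dist_1 < dist_0).astype(int)

print(f"Agreement on the synthetic experience: {(pred == y_syn).mean():.1%}")
```

Real AI systems use far richer models than this nearest-average rule, but the mechanism, many recorded examples plus many generated variations, is the point of the sketch.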
4) General Education
In past major development shifts, education about what was happening was more or less left to the businesses that produced and marketed the new technologies. However, the possible impact of AI is too significant to simply hope that this education will work out. If 50% of jobs are irreversibly eliminated, we need concepts and education for how this will not turn into a catastrophe. We need to explore options in advance and show those who were educated to look for work and do what others tell them what opportunities exist in this new world. For the past 300,000 years humans took care of themselves, were rather autonomous and were creative in order to survive. In the past 200 years that changed: the main job was to look for work and just do it. We are, in a way, going back to more autonomy and self-determination. While this is an amazing and positive development, for many people it is an open void they may not be able to fill by themselves. Education and guidance is a key aspect of leadership; it always was.
Country leaders should seriously consider a major makeover of their respective education systems. Mechanical learning of physics, math, chemistry, biology, history and languages is no longer enough. And as our knowledge base now doubles every year and the learning capacity of the young has already reached its limits, we need to consider actually reducing the amount of content in the core classes and adding data technology, society, and political rights and responsibilities, as these are topics everybody needs to understand better.
5) Language Empowerment
The world's information is by and large stored in the English language. It even matters that the term 'Artificial Intelligence' is used in English rather than in a local language. One of the key success factors in AI is the openness and willingness to cooperate globally, which by default means in English. Leadership by 'closed shop', keeping everything close to one's chest and secretive, will never lead to superiority.
Country leaders should seriously consider reviewing language learning in their country. Even though it is part of general education, it has a very special position in AI development. It is no longer only about the globally universal language of code; there is also the new language of system interaction. Already today, AI components in English outnumber those in any other language on earth many times over: roughly 200 French-language 'skills' (applications) or close to 3,000 German skills versus more than 10,000 English skills.
6) Culture & Failure Tolerance
Whatever a team is creating, it is critical to get the early prototype rapidly into the market and work with massive amounts of real usage. AI cannot be tested only in a lab; it needs to be tested in the market. European or Asian perfectionism is counterproductive to AI development and will automatically make those actors fall behind. Google's AI project for learning about battery usage in Android smartphones moved within six months from concept to being used globally. We can learn only from errors; we will not learn from a perfectly working solution. What sounds stupid to many engineers has been proven right over and over again. The motto 'fail, and fail fast' is even more important in AI development.
Country leaders may want to consider driving an initiative that reframes making mistakes as an act of learning and progress rather than an act of failure. Chinese president Xi Jinping told Chinese entrepreneurs in early 2018: 'Making mistakes is the best way to learn fast'.
7) Transparency
Due to the competitive nature of any new technology, we will most likely never be able to find out who is working on what. Our brains are still the most private part of our lives, and hopefully will be forever. Yet we should develop a work ethic that calls for voluntary transparency in AI development. Recent incidents have demonstrated that a business may not trust its own government because of its sheer power, so a government should not even attempt to play a controlling role in the transparency question. And since trust is one of the key issues here, a system like the one we know from blockchain development may be a future option. Another reason to keep governments out of this role is the lack of trust among governments themselves. The current suggestion is that AI scientists select a consortium of trusted, anonymous third parties who are tasked with overseeing the development as such and analyzing the potential risks, with no enforcement power.
Country leaders may want to consider endorsing such an engagement, maybe sending an observer, but not 'controlling' it in any respect. The control should instead lie in relevant and meaningful AI safety policies and law enforcement activities.
8) Taxation
There is a real possibility that Autonomous Machines (AI-driven systems) could wipe out 50% of ordinary jobs by 2100 (plus or minus 50 years). No society would recover from such a situation without well-thought-out planning. One concept may be that each robot is charged a tax equal to that of one employee. That "AM-Tax" could then be used to fill a fund for an unconditional basic income or a similar system. While the company would still have to purchase such a machine, it could quickly amortize the investment, since this artificial employee does not take vacation, does not need social extras, and could possibly even be rented as needed. Several scenarios can be found in "World with no work".
Country leaders should be working on models for their respective societies well ahead of time. That includes considering competition from outside their country. The AM-Tax might be tied to the production output (revenue) or the operative savings (cost) the machines produce or save.
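As a rough illustration of how such an AM-Tax could still leave the machine economically attractive, here is a minimal sketch with purely made-up numbers; none of these figures come from the framework, they only show the arithmetic of amortizing the machine despite the tax.

```python
# Minimal sketch of the AM-Tax idea with purely illustrative, made-up numbers.

machine_cost = 240_000            # one-time purchase price of the autonomous machine
employee_cost_per_year = 60_000   # fully loaded yearly cost of the replaced employee
am_tax_per_year = 20_000          # hypothetical yearly AM-Tax "equal to one employee"

# The machine replaces one employee but still owes the AM-Tax,
# so the yearly saving is the employee cost minus the tax.
saving_per_year = employee_cost_per_year - am_tax_per_year

# Years until the purchase is amortized despite the tax.
payback_years = machine_cost / saving_per_year
print(f"Payback period with AM-Tax: {payback_years:.1f} years")  # 6.0 years here
```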
9) AI / AM Imports
When importing AI from other countries, we need to be aware that this may carry a certain danger. The machines may not comply with a country's safety and privacy rules, data may be abused or used illegally, and more. Importing such machines is a subject similar to importing weapons.
Country leaders should consider establishing AM (Autonomous Machine) import rules early on and, most importantly, taxes in line with their overall AI or AM taxation.
10) Safety & Privacy Policies
Another key aspect of the leadership framework is the existence of the “Safety Trio”:
1) Privacy & Data Policies
2) Criminal Acts Policy
3) Human Protection Policies
10.1) Privacy & Data Policies
These policies need to be established not only to protect the general population from misuse of data, but also to educate about the consequences of not providing data. Data governance needs to go beyond the act of privacy protection and also cover the protection of data that belongs to its creator. Yet it needs to be understood that data can either belong to one person or legal entity in its entirety, or belong to a whole group of people or entities. The complexity and the permutations of data ownership need to be carefully explored as we go.
10.2) Criminal Acts Policy
Most likely, hacking systems and stealing data, corrupting data or systems, or destroying data, systems or network infrastructure will need to be classified as serious criminal acts. As systems get more sophisticated, they can also cause more damage than in the past and can bring people, and a whole country, into serious trouble. AI leadership needs to demonstrate its sensitivity to protection while at the same time its openness to sharing data. As such, hacking needs to be elevated to a serious criminal act with substantial penalties. The criminal acts policy should also be extended into political treaties in which country leaders agree to respect each other's criminal acts policies.
10.3) Human Protection Policy
There is a great deal of fear that AI-based robots, so-called autonomous machines, may end up harming humans. Right or wrong, AI leadership requires being responsive and offering a solution. A possible method is to legally require that every autonomous machine has a mechanism allowing it to be remotely shut off by specially selected authorities and/or mechanisms. A structure like the domain name service, where systems are distributed around the globe, may also be applicable to distributing the shut-off mechanism. Robots without such a mechanism may be considered illegal, destroyed by order of the law, and their creator and owner penalized. The instantiation of the 'Criminal Acts Policy' is particularly important for this 'Human Protection Policy'.
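To make the DNS-like idea above a little more tangible, here is a purely hypothetical sketch: a hierarchy of registries is resolved step by step, much like a domain name, until the record authorized to receive a shut-off order for a given machine is found. Every name, identifier and structure in it is invented for illustration; no such protocol exists in the framework.

```python
# Purely hypothetical sketch of a DNS-like, hierarchical shut-off registry.
# Names and IDs are invented for illustration only.

# Each level of the hierarchy only knows its direct children,
# similar to how DNS delegates zones.
REGISTRY = {
    "global": {"eu": "registry.eu", "us": "registry.us"},
    "registry.eu": {"de": "registry.de.eu"},
    "registry.de.eu": {"robot-4711": "shutoff.vendor-x.example"},
}

def resolve_shutoff_authority(machine_id: str, path: list) -> str:
    """Walk the registry hierarchy (like a DNS lookup) and return the
    endpoint that is authorized to issue a remote shut-off order."""
    node = "global"
    for label in path:                  # e.g. ["eu", "de"]
        node = REGISTRY[node][label]    # delegate to the next registry
    return REGISTRY[node][machine_id]   # final record for this machine

if __name__ == "__main__":
    authority = resolve_shutoff_authority("robot-4711", ["eu", "de"])
    print(f"Shut-off orders for robot-4711 go to: {authority}")
```

The design point is the same as with DNS: no single registry needs to know every machine, yet any authorized party can resolve the responsible shut-off endpoint.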
SUMMARY
As stated in the beginning, the AI Leadership Framework is at an early stage and far from complete. But in the interest of transparency about our own work as well, we wanted to share where we stand and what we are working on. Suggestions may move in very different directions based on input and the work we do going forward.