AI and Neuro Ethics
Spirit of our AI and Neuro Ethics
This document was created to help our customers and people affected by our work better understand the opportunities and risks that artificial intelligence and neurotechnology can pose. As with every profoundly new technology, it is hard to predict how it will evolve in general, and with the companies developing such technology in particular. This is why we are committed to being especially careful when using technology and techniques that could potentially pose a threat to humans and our general environment. At the same time, we need to continue researching and developing new ways to advance the opportunities we all co-create.
We assume that artificial intelligence will be a fast-growing and equally fast-changing technology segment for at least a decade and, more likely, beyond our current century. With the advancement of AI, neuroscience will advance significantly, and so will neuro techniques and neurotechnology. We also assume that AI-specific computer science and neuroscience have only just begun; they will affect all aspects of human development, independent of their current state. Furthermore, we assume we are on the brink of an Intellectual Revolution, in which we let machines realize our ideas, similar to the agricultural revolution around 10,000 B.C., when humans began to produce food, and the industrial revolution around 1800 A.D., when we developed machines and new materials. And finally, we assume that the nature-given ingenuity of human beings aims to make biological life exceed a planet’s lifespan.
We are committed to providing the best possible ethical standards and protections. We support the European Artificial Intelligence Act and follow practical and other recommendations from Asian, European, and US governments and the countries in which we operate.
One of the first hindrances in providing guidance is the fact that artificial intelligence is almost impossible to define. That is because science has no unified definition of intelligence, referring to human intelligence. Today, artificial intelligence is a bucket of techniques and technologies such as Machine Learning (ML), Natural Language Processing (NLP), Generative Artificial Intelligence (GAI), Generative Pre-trained Transformers (GPT), Large Language Models (LLM), and many more techniques that lead to an AI solution. Then there is Artificial General Intelligence (AGI), which is often used to predict doomsday scenarios in which a “Singularity” of knowledge and intelligence potentially wipes out humanity. And here the loop closes: as long as we assume that our brain is nothing more than a memory of things and a processor that can calculate, reason, and make decisions, we significantly underestimate the function of our brain. The same applies to neuroscience. The majority of neuroscience research focuses on pathological events in our brain. Little research addresses how we create purpose, how we understand our own purpose, the function of the multiple layers of consciousness as an instrument of our ingenuity, and ways to advance, including why and how our desire to advance is structured and how and why change is rejected by an equally large number of people.
EU Pyramid of Risks
To put BlueCallom into the perspective of the European Union’s Artificial Intelligence Act and its “Pyramid of Risks”, we state the following:
1. We do not fall under the HIGH RISK category because we do not use AI in any of the described situations:
Systems used as a safety component of a product or falling under EU health and safety harmonization legislation (e.g. toys, aviation, cars, medical devices, lifts).
Systems deployed in eight specific areas identified in Annex III, which the Commission could update as necessary through delegated acts (Article 7):
o Biometric identification and categorization of natural persons;
o Management and operation of critical infrastructure;
o Education and vocational training;
o Employment, worker management and access to self-employment;
o Access to and enjoyment of essential private services and public services and benefits;
o Law enforcement;
o Migration, asylum, and border control management;
o Administration of justice and democratic processes.
2. We do fall under the “Limited risk and transparency obligations” class because we interact with humans via a chatbot and comparable techniques, and we generate images and audio. We do not use any emotion recognition or biometric categorization.
For full transparency: images are created for the added visualization of text content. Audio is used to interact with users, to respond to questions, or to help navigate the system, similar to the interaction with GPS systems in cars. We do not use images or audio for deepfake purposes.
AI systems presenting ‘limited risk’, such as systems that interact with humans (i.e. chatbots), emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio, or video content (i.e. deepfakes), would be subject to a limited set of transparency obligations.
3. Just for completeness:
LOW or MINIMAL RISK
All other AI systems presenting only low or minimal risk can be developed and used in the EU without conforming to any additional legal obligations. However, the proposed AI Act envisages the creation of codes of conduct to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems.
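The tiered triage described in items 1 to 3 above can be sketched as a simple lookup. The area names, feature labels, and tier strings below are simplified illustrative assumptions, not legal definitions from the Act:

```python
# Illustrative sketch of the EU AI Act "Pyramid of Risks" triage.
# Area and feature names are simplified assumptions, not legal text.

HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice",
}

TRANSPARENCY_FEATURES = {
    "chatbot",
    "image generation",
    "audio generation",
    "emotion recognition",
    "biometric categorisation",
    "deepfake",
}

def risk_tier(deployment_areas: set, system_features: set) -> str:
    """Return a simplified risk tier for an AI system."""
    if deployment_areas & HIGH_RISK_AREAS:
        return "high risk"
    if system_features & TRANSPARENCY_FEATURES:
        return "limited risk / transparency obligations"
    return "low or minimal risk"

# BlueCallom's self-assessment as stated above: no high-risk deployment
# areas, but chatbot interaction plus image and audio generation.
tier = risk_tier(set(), {"chatbot", "image generation", "audio generation"})
print(tier)  # limited risk / transparency obligations
```

This mirrors the document's own reasoning: the high-risk check precedes the transparency check, and everything else falls through to the low- or minimal-risk tier.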
BlueCallom AI & Neuro Ethics
BlueCallom seeks to ensure its products are built and used in line with ethical guidelines and principles. We work with our customers to train the software to enforce ethical use of the product. The BlueCallom team routinely studies the moral values of the societies using the product and how they can be applied in the software, despite potentially conflicting morals.
1) Scope of Ethics
All generally agreed-upon ethics, such as privacy, dignity, human rights, fairness, and so forth, shall apply to AI and neurotechnology and their behavior no differently than to humans. There is no need to list every situation individually.
2) Human Intelligence Augmentation
As early as 1960, Engelbart and Licklider pioneered the concept of human intelligence augmentation through computing. Even though the technology was not there yet, the vision was clear. We follow that concept as one of our principles: to use AI for human intelligence augmentation.
3) Preventing AI Bias
While AI systems are not inherently biased, they can develop bias by learning from biased datasets. The results may then be seen as biased, and this needs to be prevented.
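One simple way dataset-induced bias can surface is as a gap in outcome rates between groups (often called demographic parity). The following is a minimal sketch of such a check; the data and function names are invented purely for illustration and do not describe BlueCallom's actual tooling:

```python
# Minimal sketch: detecting a possible dataset bias by comparing
# positive-outcome rates across groups. Hypothetical data only.

def selection_rates(records):
    """Return the positive-outcome rate per group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in outcome rate between any two groups."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates)

# Hypothetical training records: (group label, positive outcome?)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

print(selection_rates(data))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(data))       # 0.5
```

A large gap does not prove unfairness on its own, but it is the kind of measurable signal that would trigger a review of the training data before a model's results are trusted.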
4) Inclusion with Neuro Techniques
Applying neuroscience discoveries to the selection of innovation team members may appear exclusionary when choosing the best talents for the job. Yet rather than selecting by degrees and certificates, it is wiser to select based on cognitive traits such as curiosity, courage, clairvoyance, competitiveness, creativity, collaboration, communication, and continuous learning.
BlueCallom AI Protection
As an innovation and transformation management solution, BlueCallom.AI does not interface with any type of autonomous machine. Hence there is no risk of physically harming any human, or anything else for that matter. However, we see a potential risk in judging employees on their performance and also on their traits. We are implementing measures to allow disabling those features, even though the innovation outcome will then be considerably lower and the competitive advantage reduced. Rather than maintaining the notion that we are all equal while still differentiating based on people’s education, we advocate for changing that behavior towards a more natural differentiation by people’s cognitive traits, in particular because it also allows educational latecomers to pick up work momentum and develop a career based on traits and not only degrees.
AI and Neuro Ethics and Protection Adjustments and Extensions
We will learn from our users and our peers, from the market, and from new research. With that, we must remain open to adjusting and extending our Ethics and Protection Guidelines. Suggestions can be made inside the BlueCallom.AI application and receive an instant response.
This version of our AI Ethics and Protection Guideline was released July 1, 2023.
So far, it has been updated 0 times.