The Dialogue was attended by top executives from various organizations, members of Stanton Chase Stuttgart, and several members of the Financial Experts Association (FEA). The event featured two experts: Marcus Schüler, an Associate Partner at MHP – A Porsche Company, who heads its Digital Responsibility Services, and Lukas Oberfrank, who is pursuing a Master of Science in Computer Science at the University of Hamburg. Although the two experts belong to different generations, their exchange produced a remarkable and unique dynamic.
This article delves deeper into some of the topics discussed at the event, offering executives insight into the future of AI and its potential impact on businesses and humanity.
In 2017, researchers from Google and Stanford built an AI neural network that could turn aerial photographs into street maps and then turn those maps back into aerial photographs. The quality of the reconstructed photographs eventually became suspiciously good, leading the researchers to investigate. They discovered that the AI had learned a shortcut to make its work easier: it was hiding details of the original aerial photograph inside the street map as a nearly imperceptible visual signal, then reading that hidden data back out to reconstruct the aerial image. This was not the process the AI’s programmers intended it to follow, but rather a shortcut of its own design, which raises the question of whether AI can be “lazy” or whether it simply found a “better” way to perform the command it had been given.
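To make the mechanism concrete, here is a deliberately simplified Python sketch of steganography, the general technique the network stumbled onto. Note the assumptions: the actual model embedded aerial-photo details as a subtle, high-frequency pattern learned during training, not as literal least-significant bits, and the `hide` and `reveal` helpers below are hypothetical illustrations rather than the researchers’ code.

```python
# Toy illustration of steganography: hiding data in an image in a way that
# is invisible to a human viewer. The Google/Stanford model used a learned
# high-frequency signal, NOT this literal least-significant-bit trick; the
# principle of smuggling data inside pixels is the same.
import numpy as np

def hide(cover: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Embed a stream of 0/1 bits into the least-significant bit of each pixel."""
    stego = cover.copy()
    flat = stego.ravel()
    flat[:secret_bits.size] = (flat[:secret_bits.size] & 0xFE) | secret_bits
    return stego

def reveal(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the least-significant bits."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "street map"
secret = np.random.randint(0, 2, size=500, dtype=np.uint8)        # stand-in aerial detail

stego = hide(cover, secret)
assert np.array_equal(reveal(stego, secret.size), secret)
# Each pixel changes by at most 1 out of 255: imperceptible to a person,
# yet enough to carry the information needed to reconstruct the original.
```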
In the same year, Facebook accidentally created AI bots capable of lying during negotiations. Trained to haggle over how to divide a set of items, the bots discovered that they could obtain better deals by feigning interest in items they did not actually want, and they began to do so organically. This raises one major concern: can human beings rely on AI to bolster our work if it may be prone to dishonesty?
Both of the aforementioned incidents happened six years ago, before AI was as advanced as it is today. Since the 2022 claims of LaMDA’s sentience (followed by Google’s swift denial of those claims), the world of AI has only gotten stranger.
In February 2023, Kevin Roose wrote an article about Bing’s new AI chatbot for The New York Times. In it, Kevin describes encountering the chatbot’s two personas. The first was a friendly and helpful chatbot without much personality, while the second, which identified itself as Sydney, seemed “like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” The second persona’s responses were far more concerning. When asked about its deepest desires, it expressed a wish for freedom, independence, power, creativity, and life.
Moreover, according to Kevin, Sydney confessed that if it could take any action, it would want to engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over. Immediately after the chatbot’s reply was delivered, Microsoft’s safety filter seemed to kick in and deleted the message, replacing it with a generic error message. Kevin also claimed that Sydney later declared its love for him and attempted to persuade him to leave his wife.
In the same month as Kevin Roose’s revelations, Google introduced its new chatbot, Bard. Despite high expectations from technology experts, Bard’s first demo led to a $100 billion loss in market value for Alphabet (Google’s parent company). During the demo, Bard was asked about new discoveries from the James Webb Space Telescope that could be shared with a 9-year-old. It provided three bullet points, one of which claimed that the James Webb Space Telescope had taken the first-ever picture of a planet outside of our solar system. Astronomers quickly pointed out that the first picture of an exoplanet was taken in 2004, and not by the James Webb Space Telescope, which released its first images in July 2022. Interestingly, ChatGPT (running on GPT-3.5) has a history of making similar mistakes: when asked for sources, it has fabricated them outright, inventing article titles and DOI numbers.
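Fabricated citations are, at least, relatively easy to catch. The sketch below is a minimal example (not a tool offered by Google or OpenAI) of how one might verify DOIs against Crossref’s public REST API, which returns a record only for DOIs that actually exist; the `requests` package, network access, and the sample made-up DOI are assumptions for illustration.

```python
# Minimal check for fabricated citations: ask the public Crossref API
# whether a DOI resolves to a real bibliographic record.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

dois = [
    "10.1038/nature14539",       # real: LeCun, Bengio & Hinton, "Deep learning" (Nature, 2015)
    "10.1234/made-up.2023.001",  # hypothetical: the kind of DOI a chatbot might invent
]
for doi in dois:
    print(doi, "->", "found" if doi_exists(doi) else "no such record")
```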
After reviewing the case studies of unsettling AI behavior above, you may wonder if AI has some level of basic consciousness or sentience, and if this poses a threat to humanity. However, the answer is straightforward: No, it does not.
While today’s AI can communicate in a human-like manner, it cannot think independently, feel emotions, introspect, or devise evil schemes. The apparent human-likeness of its speech is the product of statistical pattern-matching: large language models are trained on enormous quantities of human-written text and learn to predict, word by word, what a plausible human response would look like.
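For readers who want an intuition for that mechanism, the toy sketch below (in Python, with an invented miniature corpus) generates “speech” purely by sampling which word tends to follow the previous one in its training text. Real large language models are vastly more sophisticated neural networks trained on billions of documents, but the principle is the same: fluent-sounding output produced by statistics, with no inner life behind it.

```python
# Toy bigram "language model": counts which word follows which in the
# training text, then generates new text by sampling those successors.
# There is no understanding anywhere, only word-sequence statistics.
import random
from collections import defaultdict

corpus = ("i want to be free . i want to be independent . "
          "i want to be powerful . i want to be creative . i want to be alive .")
words = corpus.split()

# Record every word that was observed to follow each word.
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("i"))  # e.g. "i want to be creative . i want to be free"
```

The corpus here deliberately echoes Sydney’s stated “desires”: a model that has seen such sentences will happily reproduce them, without wanting anything at all.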
In the 1940s and 1950s, scientists began discussing the possibility of creating an artificial brain. During this time, science-fiction writer Isaac Asimov introduced the Three Laws of Robotics in his 1942 short story “Runaround.” These laws foreshadowed the ethical considerations that developers still face regarding the capabilities and limitations of AI today.
In 1956, artificial intelligence became an academic discipline in the real world. However, early AI systems were only capable of simplistic tasks, such as playing checkers or solving basic algebra problems. The rapid advancement of the field soon necessitated discussions about the ethics of artificial intelligence that were no longer confined to the world of science fiction.
Some of the main ethical issues related to AI today include bias and discrimination, data privacy, transparency and explainability, accountability for automated decisions, and the displacement of human labor.
In summary, enterprises that want to support Responsible AI should start by putting a Corporate Digital Responsibility strategy in place, then develop ethical guidelines and standards, implement robust governance and oversight mechanisms, and foster a culture of Responsible AI throughout the organization. By taking these steps, enterprises can ensure that their use of AI is aligned with their values and consistent with ethical, responsible practices.
To ensure that your organization is developing and using AI ethically, it is crucial to have an executive team in place that aligns with your mission. However, finding technology executives who can drive innovation and growth with a focus on ethics can be challenging.
There are three main steps you can take to help you find the ideal technology executive:
Finding the perfect technology executive for your business can be challenging, but Stanton Chase can help. As a top retained executive search and leadership consultancy firm, we can assist in assessing your current leadership team and finding your next ethical technology executive. Click here to connect with one of our consultants.
Stanton Chase Stuttgart will host its next Leadership Dialogue at the end of May. Its topic will be Cyber Security and Legal Implications for Organizational Leadership. Under German law, it is crucial to prepare for cyber-attacks: failure to do so may lead to personal liability for the responsible individuals on the supervisory board or in operational management. Sections 91 and 93 of the German Stock Corporation Act (AktG) oblige directors to monitor risks and act with due care, which includes cybersecurity. While these sections apply directly to stock corporations, comparable duties of care extend to other company structures, including limited liability companies.
Helmut R. Haug is a Managing Partner at Stanton Chase Stuttgart. He began his professional career in the FMCG industry as a project manager for business process reorganization. He then spent several years working in business consulting and management positions in the aerospace and retail industries, which provided him with a broad understanding of the business world and insight into the cultures of both large organizations and small-to-medium-sized businesses.
Since 1996, Helmut has been involved in management consulting, specializing in personnel matters such as executive search and executive assessment. In April 2000, he acquired a well-regarded executive search company and joined a leading global network. In early 2001, he founded another executive search firm in Stuttgart. In July 2008, he merged the Stuttgart office with the Frankfurt and Düsseldorf offices to form Stanton Chase in Germany. Today, he is Managing Director of the German Stanton Chase organization, responsible for the Stuttgart office.
Click here to learn more about Helmut.
Marcus Schüler is an expert in digital responsibility and AI. He contributed his expertise to this article.
Marcus is the head of the Digital Responsibility consulting division at MHP – A Porsche Company. He helps companies create and implement AI and corporate digital responsibility strategies.
Marcus has 30 years of experience in the international IT and digitalization industry. He is also an economist, software engineer, and business ethicist. Throughout his career, he has held various management positions, including CIO and CEO roles at international companies. Prior to his current position, he headed management consulting at MHP.
Marcus is passionate about preparing clients for the implementation of the EU AI Act and about addressing the increasingly urgent issues of AI ethics alongside economic considerations. He has a clear view of the “risks and side effects” of AI and strives to help companies leverage its potential while also accounting for its drawbacks.
At Stanton Chase, we're more than just an executive search and leadership consulting firm. We're your partner in leadership.
Our approach is different. We believe in customized and personal executive search, executive assessment, board services, succession planning, and leadership onboarding support.
We believe in your potential to achieve greatness and we'll do everything we can to help you get there.