
How to Create a Malevolent Artificial Intelligence

For those of you who have been following my work, it should come as no surprise that I have an ambivalent view of technology.

Technology is arguably the predominant reason we live safer, longer, and healthier lives than ever before, particularly when we include medical technologies – sanitation, antibiotics, vaccines – and communication technologies – satellites, the internet, and smartphones. It has immense potential, and it has been the driving force of innovation and development for centuries.

But it has a dark side. Technology, once a strong democratizing force, now drives greater inequality. It allows governments and corporations to spy on citizens at a scale that would make Orwell's worst nightmares look like child's play. It could lead to a collapse of the economic system as we know it, unless we find, discuss, and test new solutions.

To a certain extent, this is already happening, albeit not in a uniformly distributed fashion. If we consider a longer timeframe – perhaps a few decades – things could get far more worrisome. I think it's worth thinking about these issues and preparing now, rather than despairing once it's too late.

Many distinguished scientists, researchers, and entrepreneurs have expressed such concerns for almost a century. In January 2015, dozens of them, including Stephen Hawking and Elon Musk, signed an Open Letter calling for concrete research on how to prevent certain potential pitfalls, noting that "artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which cannot be controlled".

And this is exactly what Roman Yampolskiy and I explored in a paper we recently published, titled Unethical Research: How to Create a Malevolent Artificial Intelligence.

Cybersecurity research involves investigating malicious exploits as well as designing tools to protect cyber-infrastructure. It is this exchange of information between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent one.

It seemed rather odd to us that virtually all research so far had focused on preventing the accidental and unintended consequences of an AI going rogue – e.g. the paperclip scenario, in which an AI single-mindedly optimizing a trivial goal ends up consuming everything else. While this is certainly a possibility, it is also worth considering that someone might deliberately want to create a Malevolent Artificial Intelligence (MAI). If that were the case, who would be most interested in developing it, how would it operate, and what would maximize its chances of survival and ability to strike?

Such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which could negatively affect human activities and, in the worst case, cause the complete obliteration of the human species.

This includes the creation of an artificial entity that can outcompete or control humans in any domain, making humankind unnecessary, controllable, or even subject to extinction. Our paper provides some general guidelines for the creation of a malevolent artificial entity, and hints at ways to potentially prevent it, or at the very least to minimize the risk.

We focused on some theoretical yet realistic scenarios, touching on the need for an international oversight board, the risk that non-free software poses to AI research, and how the legal and economic structure of the United States provides the perfect breeding ground for the creation of a Malevolent Artificial Intelligence.

I am honored to share this paper with Roman, a friend and a distinguished scientist who has published over 130 academic papers and contributed significantly to the field.

I hope our paper will inspire more researchers and policymakers to look into these issues.

You can read the full text at arxiv.org/abs/1605.02817: Unethical Research: How to Create a Malevolent Artificial Intelligence.


A response to Scaruffi's Millennium Questions

This is an attempt to respond to the 10 Millennium Questions posed by Piero Scaruffi in his latest blog post. Be advised, I shall not succeed. But I shall have fun trying.

I took the liberty of creating a title for each question, to better organise them visually. I apologise in advance if by doing so I simplified the concepts to the point of inaccurately depicting them. Please refer to the full text of the question, and use the title merely as a reference.

1. What medium can we use to perceive other universes?

A particle that has no mass, the photon (i.e. light), is the medium that allows us (objects with mass) to perceive the other objects with mass that populate this universe. What kind of medium can help us perceive other universes that are based on different physical laws? A thing that obeys no physical law?
λν = c (wavelength times frequency equals the speed of light)
E = hν (a photon's energy is Planck's constant times its frequency)
m = 0 (the photon has zero rest mass)
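
To give a sense of scale (a back-of-the-envelope calculation of my own, not part of Scaruffi's questions), a green photon with a wavelength of roughly 550 nm carries

\[
\nu = \frac{c}{\lambda} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{550\times10^{-9}\ \mathrm{m}} \approx 5.5\times10^{14}\ \mathrm{Hz},
\qquad
E = h\nu \approx (6.63\times10^{-34}\ \mathrm{J\,s})(5.5\times10^{14}\ \mathrm{Hz}) \approx 3.6\times10^{-19}\ \mathrm{J} \approx 2.3\ \mathrm{eV},
\]

i.e. a couple of electronvolts – roughly the energy of the chemical bonds in the pigments of our retina, which is presumably part of why evolution settled on light as our window on the world.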

I suppose the reason we have relied on light, so far, is that:

  1. our eyes have evolved to perceive objects through this medium, which in turn led us to build mental frameworks to make sense of such perceptions
  2. thanks to Einstein's work on the photoelectric effect, and subsequently Niels Bohr's research on quantum mechanics, Richard Feynman's efforts on quantum electrodynamics, and the contributions of many others, we have a set of theories so successful that they allowed us to overlook other potential candidates for perceiving objects

We know so little about other forces that seem to interact with us in strange and mysterious ways that any attempt to explain them further with our current understanding would be mere speculation.

And so I shall.

Dark Matter and Dark Energy are just placeholder names for seemingly unexplained forms of matter and energy that (apparently) interact only weakly with ordinary matter. They could really be a family of energies or media that follow laws we don't yet know, or laws that don't fit our universe at all. It could be that "dark energy" exists in another bubble universe next to our own, and that all we see is the shadow effect of that universe's dark energy being close to us. It could be that such energy transfers through a currently unknown medium from universe to universe, and that by moving from one bubble to another it changes its properties.

Or, I could be completely wrong (most likely).
