
Category Archives: AI and Society

1. A web-based eBook reader and authoring tool designed to let authors embed System Dynamics models, including stock and flow diagrams, controls that allow readers to adjust parameters, and dynamic graphs. System Dynamics is basically an approachable way for non-experts to design and interact with simulation models that are, under the hood, systems of coupled ordinary differential equations.
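To make the "under the hood" part concrete, here is a minimal sketch of what a single stock-and-flow model boils down to numerically: one ordinary differential equation integrated step by step. The "bathtub" model, its parameter names, and the Euler integration scheme are all illustrative assumptions, not part of the tool described above.

```python
# Minimal sketch of a System Dynamics stock-and-flow model as an ordinary
# differential equation, integrated with Euler's method. The model here
# (a "bathtub": constant inflow, outflow proportional to the stock) and
# its parameter names are hypothetical illustrations.

def simulate(inflow_rate, drain_fraction, stock0, dt=0.25, steps=40):
    """Integrate dStock/dt = inflow_rate - drain_fraction * Stock."""
    stock = stock0
    history = [stock]
    for _ in range(steps):
        inflow = inflow_rate                 # a constant inflow (a "flow")
        outflow = drain_fraction * stock     # outflow proportional to the stock
        stock += dt * (inflow - outflow)     # Euler integration step
        history.append(stock)
    return history

levels = simulate(inflow_rate=10.0, drain_fraction=0.1, stock0=0.0)
# The stock rises toward the equilibrium inflow_rate / drain_fraction = 100.
```

A reader-adjustable control in the eBook would, in this picture, simply be a slider bound to a parameter like `drain_fraction`, with the dynamic graph replotting `levels` on each change.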

2. The SD models will include those using Qualitative Differential Equations (QDEs). These let the user run simulation models that determine the implications of intervals and qualitative information about stocks and flows even when exact values are not available, for example: stock A has a level above 1000, or the flow rate from A -> B is increasing.
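A toy sketch of the kind of reasoning this enables: if we only know bounds on a stock and on its net flow, we can still bound the stock's level over time. The specific intervals and function names below are illustrative assumptions in the spirit of qualitative/interval simulation, not the tool's actual semantics.

```python
# Toy interval propagation for a single stock, in the spirit of Qualitative
# Differential Equations: conclusions from bounds, not exact values.

def step_interval(stock_lo, stock_hi, flow_lo, flow_hi, dt=1.0):
    """Given Stock in [stock_lo, stock_hi] and net flow in [flow_lo, flow_hi],
    return bounds on the stock after one time step."""
    return stock_lo + dt * flow_lo, stock_hi + dt * flow_hi

# Example: stock A is known only to lie in [1000, 1500] (i.e. "above 1000"),
# and the net flow into it is somewhere in [5, 20] units per step.
lo, hi = 1000.0, 1500.0
for _ in range(10):
    lo, hi = step_interval(lo, hi, flow_lo=5.0, flow_hi=20.0)
# After 10 steps we can conclude 1050 <= A <= 1700, and that A is
# increasing (the flow's lower bound is positive), with no exact data.
```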

3. The pages in the book can change depending on the state of the system model(s).

4. Meta-models. The software will represent models symbolically, allowing it to manipulate models as data. The reader will be able to interact with the software to build a simulation model not planned in detail by the author.
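One way to picture "models as data": the model is a plain data structure that software (or a reader) can inspect and rewrite before running it. The schema, names, and rates below are a hypothetical illustration of the idea, not the tool's actual representation.

```python
# Sketch of a meta-model: a stock-and-flow model held as plain data, so
# new flows can be spliced in at runtime -- e.g. one proposed by a reader
# rather than planned by the author. All names here are illustrative.

model = {
    "stocks": {"Population": 100.0},
    "flows": {
        # each flow: (source stock, sink stock, rate function over stocks)
        "births": (None, "Population", lambda s: 0.02 * s["Population"]),
    },
}

def add_flow(model, name, source, sink, rate):
    """Manipulate the model as data: add a flow the author never wrote."""
    model["flows"][name] = (source, sink, rate)

def simulate(model, dt=1.0, steps=10):
    """Euler-integrate all flows against a copy of the stocks."""
    stocks = dict(model["stocks"])
    for _ in range(steps):
        deltas = {k: 0.0 for k in stocks}
        for source, sink, rate in model["flows"].values():
            r = rate(stocks)
            if source is not None:
                deltas[source] -= r   # flow drains its source stock
            if sink is not None:
                deltas[sink] += r     # flow fills its sink stock
        for k in stocks:
            stocks[k] += dt * deltas[k]
    return stocks

# A reader splices in a death rate the author did not include:
add_flow(model, "deaths", "Population", None, lambda s: 0.01 * s["Population"])
final = simulate(model)   # net 1% growth per step, compounded over 10 steps
```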

Why? Quite a few of our public debates about policy concern economic, environmental, and social systems. SD gives authors a nice, less ambiguous language for expressing proposals and world views. As a consequence it becomes easier to reason about and critique assumptions and implications.

There is also a wonderful opportunity to apply AI to this problem. SD is a relatively small language that happens to be very powerful for formalizing a lot of policy positions. It should be quite possible to build programs that reason about SD models.

I have some 17 days of vacation time that I have to spend before the end of the year (or lose the days), so I hope to use both the vacation days and adjacent weekend days to work on this project.

My job has a habit of overflowing at times — so posting will be light for a bit. I wanted to comment on one of Matthew Yglesias’ posts:

Trauma Pod

The other is that if robots and AI are really the technology of the future, then the United States seems to be aiming a perilously large proportion of our financial and intellectual resources into military applications of these technologies rather than potentially more productive ones. In Asia they have lots of robots making stuff and taking care of people, not patrolling the skies over Afghanistan dropping bombs.

It is a really good point. What complicates matters is that productivity in the US has been rising for a while — in large part, I suspect, because of applications of information technology. But at the moment we are well under-using our labor capacity. My own feeling is that we need to bring the US back to full employment and develop new technologies. If we fail to restore our economy, robots may just take the place of human beings who would get far more benefit from a job.

I believe it was Marvin Minsky who pointed out that the US faces a demographic problem. We have an aging population and will need people to care for our elderly. Without invoking the usual nightmares, I do think AI systems can play an important role in helping care providers and recipients. But doing so also requires a decent social welfare system.

My point overall is that AI technology can be a great help, but we have to work on some of these socio-economic issues at the same time.

Like many people of my generation I grew up reading Isaac Asimov’s robot novels. In a number of them he explores a set of ethical rules for robot behavior, starting from and elaborating on the Three Laws of Robotics:

  1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  2. “A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.”
  3. “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

The limits of these laws get explored and extended in his novels, for example by extending the First Law to apply to humanity as a whole:

A robot must not merely act in the interests of individual humans, but of all humanity.

As technology stands today, machines will not spontaneously adhere to the three laws (darn!), but it is very possible for people developing AI software (and hardware?) to promise to build them into their systems. This could become a kind of ethical code for AI developers, similar to the Hippocratic Oath in medicine. I would like to see such a code.

In Asimov’s novels, robots end up taking over the management of global society for the good of humanity. This might be called Political Robotism, and I suspect a mild version of it is worth considering. Of course, the idea of robots taking over the world at the moment (contemporary AI programs are pretty dumb so far) can sometimes elicit very humorous negative reactions. Here is one of Matthew Yglesias’s blog posts:

http://yglesias.thinkprogress.org/archives/2008/03/the_threat.php

It seems that DARPA is developing some kind of robotic attack insects despite clear indications that military robots will rebel and seek to enslave/exterminate us. The defense establishment’s continued ignorance of the basic canons of sci-fi films is genuinely appalling.

But there clearly already are some specific roles that are handled through automated processes, usually in a very clumsy way. It may be worth examining how these could be improved. What we would want includes:

  • Public support and participation in the development of the values that are the basis for the decisions made by software. (democracy)
  • Protection for the rights of minorities. (civil liberties)
  • Software that is easy to understand and can be run in parallel by anyone who is interested in checking on the process. (transparency)
  • The ability for members of the public to propose changes and argue for those changes based on simulations open to the public. (adaptability)

Assuming all of these are present, there is a potential advantage to using computers. It is impossible for us to be sure of the motivations behind decisions made by human beings, but we can both read and test software. In other words, using computers can make the rule and the operations manual a single public document that is subject to debate and critique, where human judgements may not be.

There is a risk here: every time we replace a human organization with automation we need to be sure we really understand what that organization does. There may be hidden or informal functions and activities that are not part of the organization's formal mandate. Automating may replace those informal functions with only the explicit, public ones, for better or worse.