The Singularity Institute for Artificial Intelligence (SIAI) is a non-profit organization whose goal is to develop a theory of Friendly artificial intelligence and to implement that theory as a software system. This goal follows from the belief that a technological singularity is likely to occur and that its outcome depends heavily on the design of the first AI to exceed human-level intelligence. The organization was founded in 2000 and has the secondary goal of fostering broader discussion and understanding of moral artificial intelligence.
The SIAI holds that reliably altruistic AI ultimately offers better prospects for addressing major challenges facing humanity (e.g., disease, poverty, and hunger) than any other project seeking to advance the common good.
The executive director of SIAI is Tyler Emerson, its advocacy director is Michael Anissimov, and its researchers include Eliezer Yudkowsky and Michael Wilson. The SIAI is tax-exempt under Section 501(c)(3) of the United States Internal Revenue Code. In 2004 a Canadian branch, SIAI-CA, was formed by Michael Roy Ames to allow Canadian donors to benefit from tax relief; it is recognized as a Charitable Organization by the Canada Revenue Agency.
Founding
The Singularity Institute for AI was founded on July 22, 2000 by artificial intelligence researcher Eliezer Yudkowsky and internet entrepreneur Brian Atkins, after extended discussions on how best to approach the great risk and opportunity of smarter-than-human intelligence. Atkins and Yudkowsky met on the popular Extropy chat list, on which both had been participants for several years. The original Articles of Incorporation and Bylaws date from this founding, which took place in Atlanta, Georgia.
Atkins and Yudkowsky both accepted the controversial thesis that, given sufficient effort and resources, it would be possible within the next few decades to create an artificial intelligence able to improve on its own design independently (a seed AI). Prominent thinkers supporting this position include Oxford philosopher Nick Bostrom and inventor Ray Kurzweil, both of whom joined the SIAI Advisory Board in 2004.
Early Years
In 2000, right around the founding of the Singularity Institute, two books were released that discussed the potential and near-term (before 2040) feasibility of strong AI. These were Robot: Mere Machine to Transcendent Mind by Carnegie Mellon robotics guru Hans Moravec and The Age of Spiritual Machines by Ray Kurzweil. The widespread popularity and success of both these books contributed to the early growth and support of SIAI as an organization.
Primarily existing as an online entity, SIAI draws its main donor base from transhumanists and futurists who see rogue AI as a greater threat to humanity than weapons of mass destruction (biological, nuclear, or nanotechnological), and who see beneficial AI as one of the most valuable technological advances humanity could achieve. On June 15, 2001, the Singularity Institute released Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures, a book-length work by Eliezer Yudkowsky on the feasibility and details of Friendly AI, concurrently with the SIAI Guidelines on Friendly AI, analogous to the Foresight Guidelines on Nanotech Safety but for AI rather than nanotechnology. The response from the transhumanist and futurist communities was strong and positive. Many, even skeptics of the near-term feasibility of AI, saw Yudkowsky's work as a valuable contribution to the long-term goal of constructing benevolent goal systems in self-modifying AI. Creating Friendly AI particularly emphasized that AIs would lack any complex inbuilt tendencies aside from what was painstakingly programmed into them, including human-typical tendencies such as arrogance, reactionism, competition, empathy, philosophical contemplation, or even the very notion of an observer-centered goal system.
The Institute continued to grow steadily throughout the early 2000s. Concern for the Singularity and the possibility of strong AI began to emerge more strongly among supporters of the Foresight Institute, a Palo Alto-based organization focused on the transformative impact of future technologies, particularly nanotechnology. A special interest session on the Singularity was held during Foresight's Spring 2001 Senior Associates Gathering in Palo Alto. On May 3, 2001, What is Friendly AI?, a short SIAI paper introducing the topic, appeared on the high-traffic futurist website KurzweilAI.net, drawing additional attention to the Singularity Institute. On May 18, 2001, SIAI released General Intelligence and Seed AI, a short-book-length document describing a starting point for a seed AI theory.
On July 23, 2001, SIAI launched the open source Flare Programming Language Project, described as an "annotative" programming language with features inspired by Python, Java, C++, Eiffel, Common Lisp, Scheme, Perl, Haskell, and others. The specifications were designed with the complex challenges of seed AI in mind. The effort was quietly shelved less than a year later, however, when the Singularity Institute's analysts concluded that trying to invent a new programming language to tackle the problem of AI reflected an ignorance of the theoretical foundations of the problem. Today the SIAI tentatively plans to use C++ or Java when a full-scale implementation effort is launched.
The next major publication from SIAI, Levels of Organization in General Intelligence, was released on April 7, 2002. The paper was a preprint of a book chapter for an upcoming compilation of general AI theories, Real AI: New Approaches to Artificial General Intelligence (Ben Goertzel and Cassio Pennachin, eds.). Levels presents a more thoroughly developed version of the theory introduced in General Intelligence and Seed AI, and to date is the most detailed version of SIAI's AI theory available publicly.
The remainder of 2002 saw a number of milestones for SIAI. Christian Rovner joined as a full-time Volunteer Coordinator. The site design was overhauled, and several new documents were added, including "What is the Singularity" and "Why Work Towards the Singularity", SIAI's two main introductory pieces. A number of new volunteers joined, helping to communicate SIAI's message to a wider audience. Financially, SIAI had its best year yet, with funding levels having doubled every year since the organization's inception.
In 2003, the Singularity Institute made another strong showing at the Foresight Senior Associates Gathering, with Eliezer Yudkowsky giving a well-received talk, "Foundations of Order", which discussed seed AI as a new type of order-builder, intelligence building upon intelligence, distinct from emergence or biological evolution. He cited humans as a peculiar example of an intelligence built by evolution: transitional entities between an era dominated by evolution and an era dominated by intelligence. An edited transcript of the talk was later published as "Why We Need Friendly AI". SIAI also made an appearance at the Transvision 2003 conference at Yale University, organized by the World Transhumanist Association, where SIAI volunteer Michael Anissimov gave the talk "Accelerating Progress and the Potential Consequences of Smarter than Human Intelligence".
Recent progress
On March 11, 2004, the Singularity Institute hired its second full-time employee, Executive Director Tyler Emerson. Prior to joining SIAI, Emerson worked with John Smart to co-organize the first Accelerating Change Conference, a yearly conference held at Stanford University and organized by the Acceleration Studies Foundation, a futurist organization that encourages dialogue on accelerating change and technology issues. On April 7, 2004, Michael Anissimov was named SIAI Advocacy Director, a part-time formal role. On July 14, 2004, SIAI released AsimovLaws.com, a website examining AI morality in the context of the film I, Robot, starring Will Smith, which opened just two days later. AsimovLaws.com was a success, widely discussed and linked from the popular weblog Slashdot.
In May 2004, SIAI released "Collective Volition", an update to its theory of AI benevolence that described an extrapolation dynamic for turning perceptual data about human actions into beliefs meant to model their true preferences. On June 1, 2004, British software engineer Michael Wilson was announced as an SIAI Associate Researcher, the second official researcher to join SIAI after Eliezer Yudkowsky. At the same time, SIAI released Becoming a Seed AI Programmer, a job-description document meant to attract development team members. From July to October, SIAI ran a Fellowship Challenge Grant that raised $35,000 over the course of three months. In December, SIAI began releasing a newsletter, the Singularity Update, which gives a comprehensive report of SIAI activities every three months.
On October 14, 2004, SIAI announced the formation of their Advisory Board, with two initial members Nick Bostrom, Oxford philosopher, and Christine Peterson, Vice President of Policy for the Foresight Nanotech Institute. Later that year, Ray Kurzweil and Aubrey de Grey, prominent transhumanists, also joined the Advisory Board.
In February 2005, the Singularity Institute relocated from Atlanta, Georgia to Silicon Valley: Tyler Emerson moved to Sunnyvale, CA, and Eliezer Yudkowsky to Santa Clara, CA. Dozens of SIAI donors and supporters live in the area, so the move conferred a strategic advantage on the organization. In August 2005, Advocacy Director Michael Anissimov moved to Santa Clara and was hired full-time.
See also
- Future studies
- Singularitarianism
- Technological singularity
- Transhumanism
External links
- Singularity Institute for Artificial Intelligence
- SIAI-CA (Canada Association)