This is the talk page for discussing improvements to the History of artificial intelligence article. This is not a forum for general discussion of the article's subject.
History of artificial intelligence was one of the Engineering and technology good articles, but it has been removed from the list. There are suggestions below for improving the article to meet the good article criteria. Once these issues have been addressed, the article can be renominated. Editors may also seek a reassessment of the decision if they believe there was a mistake.
Current status: Delisted good article
I see there was a lot of discussion on this talk page about whether to preserve 'Neats vs Scruffies' or remove it.
The latest edition of Russell & Norvig's AI: A Modern Approach differs from the second edition cited earlier: the last sentence of the footnote (p. 25 of the 2nd edition, p. 24 of the 4th edition) has changed from "Whether that stability will be disrupted by a new scruffy idea is another question" to "The present emphasis on deep learning may represent a resurgence of the scruffies."
I think the new Russell & Norvig characterization and historical breakdown better describe that section, so I am changing the name to match theirs more closely, while trying to minimize disruption to the flow of the article. I had planned simply to drop the phrase "...and the victory of the neats" from the sentence "Russell & Norvig (2003) describe this as nothing less than a 'revolution' and 'the victory of the neats'."
But since I can see why others care about the 'neats vs scruffies' view and its possible future application, I am adding:
They had argued in their 2002 textbook that this increased rigor could be viewed plausibly as a "victory of the neats,"[1] but subsequently qualified that by saying, in their 2020 AI textbook, that "The present emphasis on deep learning may represent a resurgence of the scruffies."[2] Veritas Aeterna (talk) 23:30, 7 July 2022 (UTC)
References
[1] Russell, Stuart J.; Norvig, Peter. Artificial Intelligence: A Modern Approach (2nd ed.).
[2] Russell, Stuart J.; Norvig, Peter. Artificial Intelligence: A Modern Approach (4th ed.).
Which is what we called it. Now they're using Knowledge-based Engineering. But it shows that each boom/(supposed) bust cycle left something of value. Knowledge-based engineering supported one large program as it met the demands of producing a new aircraft through all of the required phases. The results were so impressive that subsequent programs adapted the method into their processes as it evolved, which is to be expected with computational systems. From a Lisp machine to Unix and then to the PC (all the time multi-platformed, with huge data requirements), we can trace the evolution to a domain which still exists. We need to pull together documentation about this phenomenal reality. ...
This was motivated by looking at papers from a Kansas State University conference, Software-based Software Development, held in October 1986 (30 years after Dartmouth), which had representatives from every effort extant at the time, including those who brought about KBE (see the talk page of ICAD (software), section "Real example needed", for more details). I have been collecting examples of my project, Multiple Surface Join and Offset (MSJO), part of whose focus was supporting the use of free-form NURBS with the solid modeling of the time.
Anecdote? One program was to use only computational modeling, but within the known constraints of the engineering processes involved. This was a huge step taken jointly with CAD/CAM systems. Computing performed. Paper modes diminished drastically. One other consequence? Known modes potentially became less stable. That is a continual concern as we improve.
One thing to discuss? What remnants carried forward through all of the summer/winter cycles? First, Lisp and user interfaces. Second, ? Third, KBE, and more.
And, what will be the one(s) from the current event? jmswtlk (talk) 15:01, 27 January 2023 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
The talk page of this 2008 listing was tagged by SandyGeorgia as requiring a GAR; I must agree. The article has not been updated to a sufficient standard since 2010; this is especially egregious considering the massive leaps in AI over the last decade.
Thus, I'll tag it as needing an {{update}}, and nominate this for delisting as failing GA criterion 3. ~~ AirshipJungleman29 (talk) 18:50, 4 July 2023 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 24 May 2023 and 10 August 2023. Further details are available on the course page. Student editor(s): NoemieCY, Nonasus (article contribs).
— Assignment last updated by NoemieCY (talk) 10:18, 28 July 2023 (UTC)
I'm going to find a place for this elsewhere in Wikipedia. It's undue weight in this article. ---- CharlesTGillingham (talk) 23:38, 29 July 2023 (UTC)
Percy Ludgate, a clerk to a corn merchant in Dublin, Ireland, independently designed a programmable mechanical computer, which he described in a work that was published in 1909.[3]
Leonardo Torres Quevedo's Essays on Automatics (1914)[4] described a calculating machine that used electromechanical parts and introduced the idea of floating-point arithmetic.[5] Torres is also known for having built, in 1912, an autonomous machine capable of playing chess, El Ajedrecista. Unlike The Turk and Ajeeb, El Ajedrecista (The Chessplayer) featured true integrated automation. It played only an endgame with three chess pieces, automatically moving a white king and a rook to checkmate the black king moved by a human opponent.[6]
Vannevar Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design. In the same year, he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.[7]
CharlesTGillingham (talk) 23:38, 29 July 2023 (UTC)
Don't think this section was essential to the article, and I'm getting ready to add a bunch of material about the 21st century. ---- CharlesTGillingham (talk) 05:15, 30 July 2023 (UTC)
In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that, by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[1][2]
In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[3] Minsky believed that the answer was that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[4] For Ray Kurzweil, the issue was computer power: using Moore's Law, he predicted that machines with human-level intelligence would appear by 2029.[5] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[6] There were many other explanations, and for each there was a corresponding research program underway.
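As an aside on the Kurzweil point: the prediction rests on a simple exponential extrapolation of computing power. The sketch below is purely illustrative (it is not Kurzweil's actual calculation; the starting year, starting compute, brain-equivalent threshold, and doubling period are hypothetical placeholders) and only shows the shape of such a Moore's-Law-style projection.

# Toy illustration, not Kurzweil's actual calculation: extrapolate compute
# under a Moore's-Law-style doubling schedule until it crosses an assumed
# "human-brain-equivalent" threshold. Every number below is a made-up placeholder.

def year_threshold_reached(start_year=2001, start_flops=1e12,
                           brain_flops=1e16, doubling_years=2.0):
    """Return the first year in which compute reaches the assumed brain-equivalent level."""
    year, flops = start_year, start_flops
    while flops < brain_flops:
        year += doubling_years
        flops *= 2.0
    return year

print(year_threshold_reached())  # 2029.0 with these placeholder inputs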
CharlesTGillingham (talk) 05:15, 30 July 2023 (UTC)
পিক
202.86.218.6 (talk) 23:51, 21 August 2023 (UTC)
Stkcar 118.179.0.50 (talk) 13:49, 25 August 2023 (UTC)
Under History of artificial intelligence#Milestones and Moore's law, we can find the following:
> The event was broadcast live over the internet and received over 74 million hits.
I think this is incorrect on two counts, at least according to the source cited (http://www.research.ibm.com/deepblue/meet/html/d.3.shtml). I think the event was broadcast over television rather than the Internet. The source also claims that "about 500 people" watched the event live on television in a basement theater, while it adds that
> The media attention given to Deep Blue resulted in more than three billion impressions around the world.
I am not sure how this translates into the number of viewers, but it is certainly distinct from the number given in the article. Fato39 (talk) 18:12, 23 September 2023 (UTC)
Paper Sst 2402:8100:2703:7279:DDDB:1A80:EC74:7A03 (talk) 12:13, 5 October 2023 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 21 August 2023 and 15 December 2023. Further details are available on the course page. Student editor(s): Ferna235 (article contribs).
— Assignment last updated by Thecanyon (talk) 05:33, 12 December 2023 (UTC)
Shouldn't E. T. A. Hoffmann's stories (The Sandman (1816) and Automata (1814)) be mentioned? Kdammers (talk) 21:08, 30 October 2023 (UTC)
Why is it not relevant?
By 2023, generative artificial intelligence has already surpassed human intelligence in some specific areas such as the search for new proteins and strategy games.[1] 176.200.82.175 (talk) 08:33, 8 November 2023 (UTC)
References
A paper by various university researchers ... in very narrow fields such as protein folding or strategy games, AI has surpassed human capabilities.
This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2023 and 14 December 2023. Further details are available on the course page. Student editor(s): Lotsobear555 (article contribs).
— Assignment last updated by Lotsobear555 (talk) 15:38, 18 November 2023 (UTC)
modern use of AI 103.100.7.208 (talk) 14:39, 11 January 2024 (UTC)