Wikipedia:Articles for deletion/Friendly artificial intelligence


artificial intelligence page: it was one of the poorest peer-reviewed publications that I have ever seen, with credible articles placed alongside others that had close to zero credibility. Also, it does not help to cite people at the Future of Humanity Institute (e.g. Nick Bostrom) as evidence of independent scientific support for the Friendly artificial intelligence idea, because the Yudkowsky organization (Machine Intelligence Research Institute) and FHI are so closely aligned that they sometimes appear to be branches of the same outfit. I think the main issue here is not whether the general concept of AI friendliness is worth having a page on, but whether the concept as it currently stands is anything more than the idiosyncratic speculations of one person and his friends. The phrase "Friendly artificial intelligence" is generally used to mean the particular ideas of a small group around Eliezer Yudkowsky. Is it worth having a page about it because there are pros and cons that have been discussed in the literature? To answer that question, I think it is important to note the ways in which people who disagree with the "FAI" idea are treated when they voice their dissent. I am one of the most vocal critics of his theory, and my experience is that whenever I do mention my reservations, Yudkowsky and/or his associates go out of their way to intervene in the discussion to make slanderous ad hominem remarks and encourage others not to engage in discussion with me. Yudkowsky commented in a recent discussion:
My emphasis is added. What I am saying is that, as an article about the fringe theory, which does not exist in a mathematical formulation anywhere, it cannot stand. It thus comes down to the article being about a goal that happens to be named "Friendly AI". Unlike the neologism we are debating, "Strong AI" has been in use for decades, but was successfully debated to be redirected, and it is mentioned in the lead of the AGI article. But only a handful of authors, relative to the decades' worth of "Strong AI", have taken up this nomenclature of "Friendly AI", despite it having been around for nearly a decade. And it still took a significant effort to get that redirect done, if I'm not mistaken. Based on this, it is definitely undue weight when we consider the balance of other topics in this field. Simply having a few sources does not constitute unrooting the entirety of existing literature on machine ethics. There is no need for a full stand-alone article when a subsection, with a POV balanced against the rest of machine ethics discourse, would suffice on that page. It is undue weight especially because ultimately the entire point of "Friendly AI" was that there is and always will be only one way to do it right. And that is stated everywhere in its materials. Not only is that assertion untrue, the burden of proof rests on them, and any editor that would try to bring
your proximity to those sole sources you are providing that is part of the issue. It is not to say they are invalid because of this, but that it is need-to-know information for someone making the final decision. That has been done, and we need not discuss it further. This isn't even the primary concern of the deletion of this article. Can you actually provide any credible 3rd-party sources that you didn't orchestrate or were involved with? Can you show, objectively, why this "theory" merits its own dedicated article? Also, what about the arguments that this is an impossible concept, and therefore will always lack an equally credible POV to dismiss it, as I mentioned above? I've asked you to prove to us that I was wrong that there exists nothing in the technical scientific literature on the theory of "Friendly AI". I know I certainly can't find it, despite reading the literature daily. This could have been solved with a quick Scholar search. But I understand you won't be doing that because it doesn't exist and can't exist due to the nature of its impossibility. So, please, do prove me wrong, and bring forth at least one or two really strong notable sources.
of the theory you so adamantly oppose. You could remove the theory from the page, or give it a proper "Theory of" heading, or clarify it as pseudoscience (there are plenty of those covered on Wikipedia). Deleting the article would be counterproductive. Because... The subject "friendly AI" is encountered as a philosophical concept so frequently out there in transhumanist circles and on the internet, that not to cover its existence as such on Wikipedia would be an obvious oversight on our part. And by "discussion" (in the field of transhumanism), I meant philosophical debate (that's what discussions in a philosophical field are). Such debate takes place in articles, in presentations and panel discussions at conferences, etc. In less than fifteen minutes of browsing, I came across multiple articles on friendly AI, a mention in a Times magazine article, an interview, and found it included in a course outline. But as a philosophical concept or design consideration, not a field of science. It was apparent there is a lot more out there. (Google reported 131,000 hits). I strongly support fixing the article.
And, contrariwise, I have just returned from a meeting of the Association for the Advancement of Artificial Intelligence, where there was a symposium on the subject of "Implementing Selves with Safe Motivational Systems and Self-Improvement," which was mostly about safe AI motivational systems ... friendliness, in other words. I delivered a paper debunking some of the main ideas of Yudkowsky's FAI proposals, and although someone from MIRI was local to the conference venue (Stanford University) and was offered a spot on the program as invited speaker, he refused on the grounds that the symposium was of no interest (Mark Waser: personal communication). I submit that all of this is evidence that the "Friendly artificial intelligence" concept has no wider academic credibility, but is only the speculation of someone with no academic standing, aided and abetted by his friends and associates. If the page were to stay, it would need to be heavily edited (by someone like myself, among others) to make it objective, and my experience is that this would immediately provoke the kind of aggressive response I described above.

not an article on their own. Most of the references are primary sources such as blog posts or papers self-published on MIRI's own website, which don't meet reliability criteria. The only source published by an independent editor is the chapter written by Yudkowsky in the Global Catastrophic Risks book, which is still a primary source. The only academic source is Omohundro's paper which, although related, doesn't directly reference these issues. As far as I know, other sources meeting reliability criteria don't exist. Moreover, various passages of this article seem highly speculative and are not clearly attributed, and may well be original research.
For instance: "Yudkowsky's Friendliness Theory relates, through the fields of biopsychology, that if a truly intelligent mind feels motivated to carry out some function, the result of which would violate some constraint imposed against it, then given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization)." Seriously, Yudkowsky can infer that using

that's not relevant as I don't accept the rejection that an article called "Friendly Artificial Intelligence", which has been stated by the creator of the theory and the goal itself, and to which 90% of the article's body text refers, is not about "Friendly AI" theory, CEV, and the "Friendliness" goal. Any reader thus far should concede this point. Hence, the original issues I've raised stand. Simple contradiction in the face of such obvious evidence doesn't follow logically. It's interesting that anyone could ignore direct wording from the author of the very concept we're debating... I'm not buying that this article is not about that which its content clearly indicates it is. So, we are at an impasse and I don't see any further reason to continue our dialectic unless new arguments are presented. I rest my position against your points as they stand. If new arguments on which we can make progress are presented, then I'll rejoin on those talking points. But since this is already getting extremely long, I won't just engage in simple contradiction. I feel the evidence stands for

but that isn't going to help the fact that this article cannot stand on its own without a significant body of notable sources. You claimed early on that there were in fact notable sources. You claimed I was incorrect that no technical/mathematical scientific paper or rigorous conjecture exists that is published from a real source, then failed to provide or substantiate that.
And the reason is that such a paper does not exist in the literature. You've been asked several times to provide some sources and citations beyond the two you did. It has been explained that, even setting those two sources aside, and even if there were no issue with them, they are not enough to allow this page to stand as-is. All you or anyone else has to do, instead of ignoring well-established guidelines, is to provide some strong sources beyond the two which have been contested. And they are contested beyond the fact of proximity; they don't hold up even if you had been someone else suggesting them. --

(OP). Russell and Norvig briefly mention "Friendly AI" in the context of machine ethics. Chalmers briefly and critically mentions Yudkowsky's "Provably friendly AI" in one paragraph of his 59-page paper "The singularity: A philosophical analysis". In general, mentions of "Friendly AI" in secondary sources are rare, brief, often critical, and always appear in a broader discussion of machine ethics. Moreover, even though Yudkowsky idiosyncratically uses the term "friendliness" instead of "ethics", his approach is all about how to incorporate an ethical system inside an artificial intelligence. It is neither about "friendliness" in the common meaning of the word (the quality of "being friends") nor about safety engineering as commonly intended. Therefore, Friendly AI is a minority view inside the field of Machine ethics. It's not notable enough for a stand-alone article.

which doesn't get consideration on an equal footing with POV and minor-POV issues, as explicitly stated in those guidelines. Such a theory could never survive direct publication in a technical journal; this is why no one so far has been able to come up with an actual source that specifies unambiguously and rigorously what the theory of "Friendly AI" is.
And the burden of proof is not on editors opposing pseudoscience, but on those who would keep it to first establish it with notable sources. All that the sources so far establish is that some people have been using the phrase "friendly AI" to refer to the act of making machine intelligence safe(r) or to discuss the theoretical implications. So, again, are we merging a neologism or merging the theory of "Friendly AI"? Neither appears to be acceptable, and for all the reasons that have been unveiled in the above comments. --
topic as if it were the original scientific creation of Eliezer Yudkowsky. Most of the page is couched in language that implies that his 'theory' is the topic, but nowhere is there a pointer to peer-reviewed scientific papers stating any 'theory' of friendliness at all. Instead, the articles that do exist are either (a) poor quality, non-peer-reviewed and sourced by people with an activist agenda in favor of Yudkowsky, or (b) by credible people (Bostrom, Omohundro, myself and others) but few in number and NOT lending credibility to Yudkowsky's writings. That imbalance makes it difficult to imagine a satisfactory article, because it would still end up looking like a paean to Yudkowsky (on account of the sheer volume of speculation generated by him and his associates) with a little garnish of other articles around the edges.
"Show the peer-reviewed mathematical proof or mathematical conjecture to counter this." - A third time, I note this is something you made up, not something with any basis in any external text, including Yudkowsky's. You simply made the leap from 'Yudkowsky is writing about something mathematics-related and used the word "theory" for something epistemic, THEREFORE Yudkowsky is claiming to have a mathematical conjecture, THEREFORE if the formalized conjecture is not provided the topic is not encyclopedic'. None of these leaps in logic has any textual basis. You really did just make them up. If you're interested in promoting encyclopedic accuracy and not spreading fabrications of any sort, you won't keep repeating this claim until you've actually found it stated in the literature.

(diff). I deeply substantiated their position in their own words with those links, as I anticipated that such a thing would be contested. Again, the book you reference does not provide the mathematical theorem or mathematical conjecture of "Friendly AI" theory, which is the modus operandi of MIRI and "Friendly AI" theory. It's what it's always been about, and they explicitly eschew the "engineering side", as evidenced by their own words from articles just days ago. I don't feel that this is the place to have a technical or philosophical discussion on this topic. This is about whether this page should be deleted, and no one, not a single person, has been able to provide a source that substantiates the
be able to stand. Those sources you mention are not alone in name dropping the words "Friendly AI", and also confusing it as a goal and a theory. But nowhere do we have the credible materials we need, not even from a primary source, on the actual mathematical proof, theorem, or conjecture of the theory itself. Hence, it collapses to purely a semantic issue about whether the goal of making AI "friendly" is or is not part of machine ethics. And, by their own admission, that is the area they work in; that they are purely theoretical, mathematical, and based on logic and decision theory, meta-ethics, etc. So it rightly belongs, at best, and that is a stretch, as a minor POV in
it is sufficient that users who have professional stakes in the subject or personal relationships with people or organizations associated with it declare them. Limitingfactor declared them himself, and in the case of Davidcpearce they are a matter of public record, since he is commenting under his real name. The fact that they have these relationships doesn't automatically invalidate their votes and comments; it just means that their votes and comments should be considered while taking into account that these relationships exist. Also, the fact that Davidcpearce suggested to add a source he was involved with doesn't automatically disqualify that source.

"The author of "Friendly AI" himself has said it is both a theory and a goal by the quotes I gave." - Where did I say anything to the contrary? The point I made wasn't that there's no such thing as 'Friendly AI theory'; it was that the existence of 'X theory' doesn't establish that 'X' is non-noteworthy or otherwise unencyclopedic. Your original grounds for deleting the article were refuted in my first comment, so I don't know anymore what your new concerns are. Possibly you should re-propose deletion in a few weeks or months after you've looked at the sources I cited and had time to organize your concern a bit.

but my comments are not about a person, but a concept, so it won't work out. It's not a straw man argument, as I'm simply presenting a source which verbatim presents what I'm saying. Claims of bad faith do not equal bad faith. We've engaged in an AfD to consider deletion of the page. Removing or blanking, or even completely starting over, doesn't resolve the issues that have been raised. Editing the article during the AfD doesn't change that we are in an AfD. The objective is to come to consensus, and that cannot be achieved through editing the article. I am also not the only editor that has requested
That there is discourse on a philosophical topic does not mean that the topic makes sense or is substantial or real in any meaningful way. So far, all the sources that can be found are merely this kind of discourse. There has never been an actual technical mathematical or logical proof or rigorous conjecture published anywhere on the idea itself, only vague language and speculation. This supports the remarks echoed by LimitingFactor and the anonymous editor(s) above as well, ultimately showing that making a quality Wikipedia article on this topic would be a feat as impossible as the topic itself. --
distinction is fuzzy to begin with, it's probably a mix of the two. Secondary and tertiary sources are both good for citation; secondary sources are preferred for more detailed presentations, tertiary sources are good for broad overviews. When the tertiary source is as widely cited and respected as R/N, it's also useful for locating the topic in its academic context and establishing notability. Primary sources too are fine for citing encyclopedically, as long as it's to fact-check a secondary source or cite an isolated claim, not to synthesize multiple claims in a novel way.
without considering these issues or looking at the (lack of) evidence to support the existence of this original research on Wikipedia. I believe strongly at this point that someone needs to at least start moving this forward by providing strong sources that substantiate this theory. But, as mentioned before, those references do not exist. Had we been having this discussion while this article was a stub it would have been a candidate for speedy deletion, but it has embedded itself and slipped unnoticed for years because of its (non-)status in the field. --
especially in light of the arguments made against it above. You could replace "Friendly AI" with any pseudoscientific theory and I would (and have, in the past) respond the same. This is a significantly weak, minimal POV that can scarcely stand on its own outside of this encyclopedia. Yet, somehow, it has spread into many articles and sideboxes on Wikipedia as if a "de facto" part of machine ethics! The reason no one has taken issue with it until now is that it has simply been ignored. Lastly, I would point out that if your primary concern was
dangerous to humans by default, unless its design was provably safe in a mathematical sense". This view has been commented on and criticized by independent academics such as David Pearce and Richard Loosemore, among others, and therefore probably passes notability criteria, but most of the content of this article is unencyclopedic essay-like/poorly sourced/promotional content, and if you were to remove it, very little content would remain, and I doubt that the article could be expanded with high-quality notable content. Therefore, '
So, again, I have to note that your arguments are just not relevant to the issue of deletion. If you think Yudkowsky is a pseudoscientist, go find reputable sources saying as much, and help make WP's coverage of the topic comprehensive and useful. Deleting every topic you think is pseudoscientific isn't how WP works; WP reports on demarcation controversies in the sciences, but it does not try to adjudicate them all. Nor does it try to use its inclusion criteria to bludgeon noteworthy views it dislikes out of memetic existence.

so that it can be properly differentiated from the general concept later (when someone is willing to include citations). The article is much more generic now. By the way, since the "F"riendly material was presented out of context, almost indistinguishable from the primary subject of the article, I've opted not to copy it to the talk page. It needs to be rewritten in context, if at all. I've got to go for a while, and have left the "Coherent Extrapolated Volition" section for last, but feel free to pick up where I left off.

That Springer published an anthology of essays does not substantiate the mathematical or logical theories behind Friendly AI theory. In fact, this will never occur, as it is mathematically impossible to do what the theory suggests and intractable in practice, even if it were. That this wasn't caught by the referees calls into question the validity of that source. Strong evidence can be brought here to counter the theory, and it would end up spilling over into the majority of the contents of the article as to why it is. Should every Wikipedia page become an open critique of fringe and pseudoscientific theories? I would hope not. Further, to substantiate a stand-alone article, this topic will need to have several high quality primary sources. Even if we somehow allow these issues I've raised to pass, that final concern should be sufficient to recommend deletion alone. --

(OP) The Springer publication is paywalled; I can only access the first page, where Yudkowsky discusses examples of anthropomorphization in science fiction. Does the paper substantially support the points in this article? Even if it does, it is still a primary source.
If I understand correctly, even though Springer is generally an academic publisher, this volume is part of the special series "The Frontiers Collection", which is aimed at non-technical audiences. Hence I wouldn't consider it an academic publication.
understanding, designing, etc. Friendly AIs. I confess I don't understand what your concern is; the existence of the word 'theory' does not make a topic non-notable. (I'm also not clear on what you think the content of 'Friendly AI theory' is supposed to be; if it's a fringe belief, what belief, exactly, is it?) Certainly there are claims being made here, and hypotheses and predictions put forward; Wikipedia should report on those claims, citing both noteworthy endorsements and noteworthy criticisms of them.
(OP) The "Singularity Rising" book by James Miller probably shouldn't be considered as an independent source, as the author has professional ties with MIRI: he is listed as a "research advisor" on MIRI's website and, as you can see on the Amazon page, the book is endorsed by MIRI's director Luke Muehlhauser, MIRI's main donor and advisor Peter Thiel, and advisor Aubrey de Grey. The very chapter you cited directly pleads for donations to MIRI! The other sources look valid, however. I agree that the general topic of
but the notable and reliable material available is very scarce. If you were to combine all the reliable secondary sources, you would get perhaps two or three paragraphs' worth of content. Is that enough to deserve a Wikipedia article? How does that compare with other views in machine ethics that may be even more notable among experts but don't happen to be backed by a large-ish community like LessWrong? Does having a stand-alone article for Friendly AI give it a fair representation, or undue weight?
viz. benevolent. These are two very, very distinct concepts which have been laminated together and are being used here to edge in an unsubstantiated theory. Again, there are no notable, credible, independent 3rd-party sources on the "theory" of "Friendly AI", and this has been stated over and over again now. As for wanting or desiring or wishing there was a canonical place to discuss "friendliness" of AI, this is not it unless it can be backed by significant quality sources. As it stands,
organize them. But what does matter is that you are clearly canvassing at this point. The points to be made have been made. It has been requested that someone — anyone — please provide credible sources other than these. Let us end this futile discussion on whether or not you are for or against whatever topics. It has never been the issue, only that it is important to know that you are pumping the source because you contributed to it and helped orchestrate it. For or against it, that is still
of which I'm not biting. Attempting to paint my position as being against a person or persons is also not going to help your case, as I am, and always have been, focused on policy; thankfully, I've always been civil and my comments reflect that here. I understand this must be frustrating, but, again, I would appreciate it if we addressed ideas and arguments and not each other. I wonder whom to contact when an administrator is doing this? I'll have to look into it. It is really unfortunate to see. --

Neither of those is a well-defined term. By 'theory' you might mean a body of knowledge, a body of beliefs, a field of inquiry, a scientific theory, a scientific hypothesis, etc. By 'goal' you might mean a state of affairs that's desired, or the desire itself, or some concrete object involved in the desired state of affairs. Yet you seem to want to get a lot of work done using these amorphous terms, in spite of the fact that the article we're discussing is
logic) or the policies and community norms on WP. You seem to be an experienced editor, yet you don't seem to see the obvious problem with deleting all articles that are about 'theories'. No one has claimed that Yudkowsky's view is the mainstream, establishment AI view. But it's acknowledged and engaged with and taken seriously by at least some of the biggest names in mainstream AI, so the topic is encyclopedic, by ordinary Wikipedia standards.
physics or a new kind of communications theory, and we were going to cover that, we would at least need a strong source that fully details that concept. It would be fair enough to provide a criticism section under machine ethics that simply addresses the concerns of making AI benevolent instead of trying to force everyone into this lexicon, which is not only not widely supported but is becoming increasingly confused with the two points above. --
A more apt title for the article would be something like "Yudkowskian Machine ethics" or "Eliezer Yudkowsky's school of Machine ethics", but the point is that these views are not notable enough to warrant a stand-alone Wikipedia article. This is evidenced by the fact that the only available sources are primary sources written by Yudkowsky and his associates, and most of them are non-academic and in fact even self-published sources.

'checks and balances' and 'safeguards', but never mentions 'morality' or 'ethics' or the like in the discussion of Yudkowsky specifically. Morality (and therefore machine ethics) is very relevant, but it's not the whole topic (or even the primary one, according to Yudkowsky). According to Yudkowsky, building a Friendly AI is primarily about designing a system with "stable, specifiable goals", independent of whether those goals are moral.

"we're talking about only a few sources brought forward since AfD started" - Yes, that's normal in notability AfDs. People who think the topic is noteworthy throw some quick references into the pot, and we reassess. AfDs are short, so in most cases the entire job of adding new sources isn't finished during one, but if in such a short span of time we find a lot of really high-quality references (as in this AfD), that's very promising. -

What is being stated is that there is significant proximity to the sole ensemble of sources which you are providing to defend the notability of the article as a stand-alone topic. There are more links available if desired, but I think this shows that this isn't conjecture on my part. By the way, still waiting on that scientific journal article on the theory of "Friendly AI" that you said was not factual on my part.
--

"I don't accept the rejection that an article called "Friendly Artificial Intelligence", which has been stated by the creator of the theory and the goal itself, and to which 90% of the article's body text refers, is not about "Friendly AI" theory, CEV, and the "Friendliness" goal" - The article as it's currently written is about Friendly AIs, not about those things, which would be the central topic of articles called

I believe this clearly violates the spirit of these guidelines, and that knowledge of this asymmetry has been used as an opportunity to present this "theory" as something stronger than it actually is. This isn't just a matter of debate, but something so incredible that it has been nearly totally ignored by the mainstream scientific community. That should be a strong indicator of the status of this "theory". --

Biopsychology is defined in its own article as "the application of the principles of biology (in particular neurobiology), to the study of physiological, genetic, and developmental mechanisms of behavior in human and non-human animals. It typically investigates at the level of nerves, neurotransmitters, brain circuitry and the basic biological processes that underlie normal and abnormal behavior."
are "often critical", I agree. Yudkowsky's views are very clearly not in the mainstream, and it's important that this article be improved by including both a fuller discussion of what those views are, and a fuller presentations of published objections and alternative views. WP can handle controversial topics fine, as long as they're noteworthy enough to leave a paper trail through the literature. -
This is a concept that is mentioned in the perpetual motion article as well. The problem with having a stand-alone article on this fallacious topic is that it shifts the burden of proof onto editors to compile a criticism section for something that is so wantonly false that it is unlikely to be formally taken up. That is to say, disproving this is simple enough that one can point to the
source, it is not sufficient for a stand-alone article on an impossible topic. It has already been repeated that his proximity to it is not, by itself, sufficient to invalidate it, but it is valuable need-to-know information. This was all stated over and over again. Reading the full discourse is helpful to prevent this kind of circular argumentation. Again, let us stop this.
"It is undue weight especially because ultimately the entire point of "Friendly AI" was that there is and always will be only one way to do it right." - That way being...? A topic can be noteworthy even if some people have normative beliefs about the topic. E.g., 'Marxism was proposed as the right way to organize society' is not a very good reason for deleting the article

"nowhere do we have the credible materials we need, not even from a primary source, on the actual mathematical proof, theorem, or conjecture of the theory itself" - There is no such 'mathematical proof, theorem, or conjecture'. You confabulated it yourself. So it's not surprising that you can't find the thing no one ever claimed existed.
any literature - popular or academic - favourable to MIRI / Friendly AI. My only comments on Friendly AI have been entirely critical. So it's surreal to be accused of bias in its favour. If the Wikipedia Flat Earth Society entry were nominated for deletion, I'd vote a "Strong Keep" too. This isn't because I'm a closet Flat Earther.--
Volition" is part of the issue with neologisms, and why they are usually weeded out on this encyclopedia. The desire to have ethical machines is distinct, more general, and has been in existence, long before "Friendly AI" theory came onto the scene. If we want to have a topic about making machines ethical there is already an
So far, no one has provided any significant citation or reference, and all that is being done is an attempt to spin or frame my responses and informational annotations about all relevant facts as ad hominem, which is in bad taste. I've already repeatedly asked that we drop this informational line of discourse on the
, as you, David, were involved with the organization, publication, and execution of that source. And you were also a contributing author beyond this. Any administrator considering this page's contents should be made aware of that fact. Now, moving back to the main points: firstly, Friendly AI as a theory is
that any human being could possibly disagree with you without being evil or deceptive in some fashion, that probably says more about the limits of your imagination than about the limits of human error. Suffice it to say that I disagree that you've provided much evidence (or even, at this point, a coherent
route to argue that Friendly AI is not in Machine ethics. Russell and Norvig discuss it in a section which has "Ethics" in its title, and somehow it is not about ethics because they didn't drop the word in the specific paragraph? Come on! As for notability, I agree that the topic has some notability,
notice was informational/supplemental in content. And that all arguments pertain to the quality of sources. Again, and this has now been repeated many times, it is not about whether someone is for or against the topic, but about rooting out the true quality of these sources and citations.
Lightbound, any Knowledge (XXG) contributor is perfectly entitled to use a pseudonym - or indeed an anonymous IP address, as did the originator of the proposal for deletion. Where a pseudonym becomes problematic is when it's used to attack the integrity of those who don't. I have not "orchestrated"
is that you are affiliated or involved in some non-trivial way with the contributors or sources or topic of concern, which is completely distinct from a Wikipedian who is absolutely putting the interest of this community first. And, in the interest of this community it should be a non-issue that this
on the topic of Friendly AI. And it is not sufficient to pass notability by proxy; using a notable source that references non-notable sources, such as Friendly AI web publications, would invalidate such a reference immediately. Fourth, even if we were to accept such a stand-alone article, it would be
I already refuted the claim that Friendly AI theory is a subdiscipline of machine ethics. My claim wasn't 'it's an engineering topic, therefore it's not machine ethics' (which is a non sequitur, false, and has been asserted by no one). Rather, my claim was 'making an agent moral isn't the same thing
here as a stand-alone topic. This isn't a complex issue, but it is obfuscated by the naming. We must all accept the fact that the evidence has shown it is both a theory and a goal, often at the same time, especially from its advocates. But if we are going to write an article on this, it has to
theory. It is not the name of a theory. It's the article title we're talking about here, and whether it warrants a place on Knowledge (XXG). Problems with the content should be worked out on the article's talk page. The term and subject "friendly AI" exists as a philosophical concept independently
can be separated cleanly from this loose concept of the "theory" of "Friendly AI", which indeed has no credible sources that detail the subject matter. That is to say, people are saying that AI should be "friendly" while confusing, or not seeing, that there was indeed a speculative, non-rigorous fringe
issue. So, you can remain troubled, but there is no issue other than the quality of the sources. As it presently stands, there are none, and all that has been brought forth is not even substantive of the subject matter. All of this leads to the fact that this is an article long overdue for
I don't care about who wrote the article or created the terminology. I think it's a reasonable topic, and not really covered in detail in any other existing article. Further, I think it's likely to be expandable. There are sufficient secondary sources from authors other than the deviser of the term. What the
of 'Friendly AI' theory." - Er, no? Russell and Norvig has been cited. Go look at a copy of the text. If an AI topic gets cited in Russell and Norvig, that's the end of the discussion as far as notability goes. The only question now is how best to organize the content on WP, not whether the content
See section 26.3, 'The Ethics and Risks of Developing Artificial Intelligence'. Friendly AI is also discussed in the book's introduction (p. 27), in its general discussion of human-level-or-greater artificial intelligence; 'Friendly AI' is one of the main terms it highlights as important for anyone
The fact that this topic gets discussed in the world's leading AI textbook at all establishes notability; it's fine if the discussion within that textbook is "rare" and "brief", since the textbook's breadth makes it remarkable that the topic is raised at all. As for whether discussions of this idea
You seem to think you will win the argument by ignoring the other side of the debate. But it doesn't work that way. The "Friendly AI theory" and "Friendliness" have mostly been removed from the article. So now the article for the most part deals with the common term "friendly AI". Continuing to
about a philosophical or speculative issue through talk pages. If we followed this suggestion, the entire article would have to be moved to the talk page, at which point it would simply become a forum. That you didn't know that "Friendly AI" is part of the "theory" along with "Coherent Extrapolated
is notable, and Yudkowsky's "Friendly AI" is probably notable enough to deserve a mention in that page, but a stand-alone article gives it undue weight, since it is a minority view in the already niche field of machine ethics. In my understanding "Friendly AI" boils down to "Any advanced AI will be
I am troubled by the inflammatory allegations being made in this discussion (by Lightbound). First, I am not a meatpuppet or sockpuppet, nor did David Pearce contact me in any way, directly or indirectly, about this discussion. I have long had an interest in this page because it is in my field of
David Pearce, you are indeed a respected critic of FAI, so I would not attack your position just because you were also involved with the Singularity Hypotheses book. My reasons for disagreement have only to do with the wider acceptance of the idea and the maturity of those who aggressively promote
I am giving your comments deep consideration and have not missed your points. Again, it isn't about pro/con. I don't make the decision on the deletion, but others should know your involvement. The issue is that you originally raised two sources to defend this article as stand-alone, but it is about
further comment to keep this focused. Still waiting on the burden of proof that there is a scientific paper that entails "Friendly AI" theory. I'm not sure there is much more that anyone can really say at this point, as, unless new sources are brought forward, this seems to devolve into a trilemma. --
David, in the interest of keeping this on topic, I'm not going to fill this comment section with all the links that would show your affiliations with many of the authors of the Springer anthology source you mentioned, and the author of the "Friendly AI" theory. Anyone who wishes to do that can find
The nature of the Russell/Norvig citation mostly renders your concerns moot. If the only citation for a topic at a given time is Britannica, it's probably noteworthy, because Britannica filters strongly for noteworthiness. Similarly for a highly esteemed introductory biology textbook and a biology
"And, as a theory, it can not stand on its own" - I can't tell whether you just aren't expressing yourself clearly, or whether you don't understand the varied ways the word 'theory' is used (or, e.g., that 'theory' is not the same thing as 'mathematical theorem', even in the context of mathematical
issues. The problems will remain: finding sources that do not merely discuss (and confuse) the two above issues, and finding sources that actually give a technically sound, rigorous, peer-reviewed proof or mathematical conjecture for the topic. That is, if someone is going to promote a new kind of
OP, I disagree with your characterizations. The relevant section of Russell and Norvig is called 'The Ethics and Risks of Developing Artificial Intelligence', which obviously includes machine ethics but is a broader topic than that. Russell and Norvig's description repeatedly mentions things like
Fourth, the conflict of interest issue is a red herring. I do not stand to gain by the deletion, and I exposed my involvement in the community of intellectual discourse related to the issue here straight away. It would help matters if the discussion from here forward did not contain any more
to support his arguments. This particular argument was that he was somehow for or against this topic, which has been pointed out repeatedly to be irrelevant and not the issue. The real issue, which I keep trying to steer us towards, is that even if we accept this anthology of essays as a credible
Administrators have been contacted. This is out of hand. Again, the primary issue isn't whether you are polemical or not; for the topic or against the topic; pro or con; love it or hate it. The sources are contested here and are invalid, regardless of the fact that you helped create and
Lightbound, if I have a declaration of interest to make, it's that I'm highly critical of MIRI's concept of "Friendly AI" - and likewise of both the Kurzweilian and MIRI conceptions of a "Technological Singularity". Given my views, I didn't expect to be invited to contribute to the Springer volume; I
and "significant independent sources". To be clear: we're talking about only a few sources brought forward since the AfD started, and that's all that has been brought forth when we exclude the Springer volume. And as for that Springer volume, which has been refuted by many editors above, two points:
should be the place for the general overview of this field and the goals it shares. Anyone reading this far should see this distinction clearly. This is obfuscated for a reason, and it is part of why this is so difficult to separate out, unpack, and discuss. Please try to see the
for that. If we want to talk about the pseudoscientific, non-credible, non-independently sourced fringe theory that is "Friendly AI", which is what this page is about, then that is another issue. I am repeating all of this because people are coming in and expressing an emotional appeal or vote
Stepping back from the fray... I think the deletion proposal is not an easy one to decide, because the topic itself (the friendliness or hostility of a future artificial intelligence) is without doubt a topic of interest and research. I voted to delete because the page, as it stands, treats the
Ignoring the evidence and arguments brought up is not consensus. We're in disagreement, and that's OK, as I've stated above. Insulting me or attributing to me statements I've not made is not going to help reach consensus. I would appreciate it if we kept the discussion on the arguments, as it is
Lightbound, a willingness to engage in critical examination does not indicate favourable bias - any more than your own critique above. We both disagree with "Friendly AI"; the difference is that you believe its Knowledge (XXG) entry should be deleted, whereas I think it should be strengthened -
that you share a close relationship with the source material, topic, and reference(s) you are trying to bring forward. This is irrespective of your intentions outside this context. And note that this is supplemental information and is not necessary to defend the case for deletion. I digress on
seems in tension with that claim: Muehlhauser repeatedly suggests that it's misleading to describe the AIs Yudkowsky/MIRI worry about in human terms, and that terms like 'decision' and 'goal' are mostly useful shorthands rather than anything philosophically deep. He suggests thinking of AIs as
I am Richard Loosemore, and I am also a contributor to the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment", Amnon H. Eden (Editor), James H Moor (Editor), Johnny H Soraker (Editor)). That book is not sufficient justification for keeping the Friendly
Introductory books are generally a mix of secondary and tertiary material; large portions of Russell and Norvig are, I think, secondary, because the field of AI itself is relatively fast-changing and new. I don't know whether to classify the Friendly AI stuff as secondary or tertiary; since the
Probably my fault via an edit conflict as you were revising your comment. To respond to your added points: The article seems to mainly be about the hypothetical agent called 'Friendly AI', not about 'Friendly AI theory', the research initiative or AGI subdiscipline concerned with forecasting,
and wouldn't stand on its own even in that context. A source merely mentioning it, referencing a non-notable primary source, is still not actually telling us what this "theory" is in any concrete way; it is simply documenting an apparent controversy over the idea of whether or not machine
The Omohundro paper is an RS independent of Yudkowsky, but looks more like primary research than a secondary review of FAI. The four sources above cover FAI in depth, and seem independent. The nexus book is from Springer and presumed reliable. The singularity book is from BenBella Books, a
323:, this article mostly deals with this "Friendliness theory" or "Frendly AI theory" or "Coherent Extrapolated Volition" which are neologisms that refer to concepts put forward by Yudkowsky and his institute, which didn't receive significant recognition in academic or otherwise notable sources. 2623:, because 'build a safe smarter-than-human AI' is a much broader topic than 'build a moral smarter-than-human AI'. Software safety engineering, even for autonomous agents, is mostly not about resolving dilemmas in applied or theoretical ethics. (And 'Machine ethics' can't be merged into 665:
is still taken seriously as an academic publication under Springer, and it's certainly peer-reviewed. But you're also changing the topic. How about just answering my question? Then we can move on to other topics at our leisure. What is the 'mathematical impossibility' you have in mind?
Well, the title and the lead section indicate a concept, not a formal theory, though Yudkowsky and his capitalized "Friendly AI" and "Friendliness" were interwoven throughout the entire article. I've started to revamp the article, and have begun extricating the edged-in "theory" (per
890:, the article could have reflected that before it was nominated for deletion, as it has been in place for years, and you have ties with its author and those interested in its theme. Again, sharing a close connection with the topic and or authors should be noted by administrators. -- 460:. That it has lasted this long on the Knowledge (XXG) is evidence of the lack of interest to researchers who would have otherwise recognized this and nominated deletion sooner. As we all know, Knowledge (XXG) is not a place for original research. Second, even if you manage to find 352:
The IJ Good / MIRI conception of posthuman superintelligence needs to be critiqued, not deleted. The (alleged) prospect of an Intelligence Explosion and a nonfriendly singleton AGI has generated much controversy, both on the Net and elsewhere (e.g. the recent Springer Singularities
of "Friendly AI" theory. Attempting to segue into another category based on the current wording and narrative coming out of MIRI isn't going to help substantiate this concept or this page, as that has never been and, by their admission, is not what "Friendly AI" theory is about.
Distinguishing criticisms about whether or not AI can be made benevolent or kept benevolent, which is a question more general than and not specific to the "Friendly AI" theory. This, doubtless, was part of the idea behind naming this theory in such a way. This is the issue with it being
of "Friendly AI". And there is a very logical reason why there is not, and it is related to why it was published in an anthology. "Friendly AI" cannot survive the peer-review process of a technical journal. To do so, such a paper would need to come in the form of a
is not relevant here, as your claim 'this is not a noteworthy article' is what's under dispute. Adding references to establish notability is precisely what's called for in notability AfDs, and it doesn't make sense to dismiss scholarly sources on the grounds that
Anon, like you, I disagree with the MIRI conception of AGI and the threat it poses. But if we think the academic references need beefing up, perhaps add the Springer volume - or Nick Bostrom's new book "Superintelligence: Paths, Dangers, Strategies" (2014)?
is not an ad hominem; it is a fact that you contributed to the Springer source, and it is a verifiable fact through simple Google queries that you know the author(s) involved in the article. This is important for judgement in looking at the big picture of
I believe a strong case has been made for keeping the article. Now, the main thing concerning me is the length of this discussion compared to the length of the article. We should get back to building the encyclopedia, by working on the article itself.
2483:"theory". And there is no way we can credibly, reliably source such a distinction between "Friendly AI" as a "benevolence" colloquialism from the more general discipline of machine ethics. This was also pointed out in my comment below in response to 2009:. A marginally notable topic and surmountable article problems suggest keeping the article. Even if others don't find it notable, basic facts about FAI ideas (it exists, when it was coined, a short summary) are verifiable in reliable sources. Per 3200:. So, that, plus the above arguments, should slam the door on that issue. This has always been both a theory and a goal. And, as a theory, it can not stand on its own, as I've repeated now many times. Show the peer-reviewed mathematical proof or 2796:, which I am linking again for posterity. "Friendly AI" theory has never been about "software safey". By their organization's own admission, it's been purely a research and theoretical issue and not an engineering one. Here is a quote from the 2806:"If we can reformulate the important philosophical problems related to intelligence, identity, and value into precise enough math that it can be wrong or not, then I think we can build models that will be able to be successfully built on, and 588:, you wrote "it is mathematically impossible to do what the theory suggests and intractible in practice". What, specifically, are you claiming is 'mathematically impossible', and how do you know this? On what basis are you confident in your 1077:. Thankfully, someone was able to bring this information to light so that it could at least be known. What to be done about it is up to administrators. My only purpose in pointing out a fact was to provide the whole truth. I do not have a 3389:. If that were the case, those who do line-editing would also seem biased on complex topics requiring extended discussion. I've been responding to people in good faith and on point, and with new materials and evidence. I hear you, and 3166:. 
Obviously all of those topics are extremely relevant to the 'Friendly AI' page, but it's a fallacy of equivocation to conflate 'X is about Y' in the sense of 'X is in some way relevant to Y' with 'X is about Y' in the sense of 'Y is 1027:; while the world waits for an academic to draft a formal refutation of an informally stated concept that hasn't even been put forward as a stand-alone mathematical conjecture, the article would remain here on the Knowledge (XXG) as 283: 842:
Lightbound, your criticisms of Friendly AI are probably best addressed by one of its advocates, not me! All I was doing was pointing out that your original claim - although made I'm sure in good faith - was not factually correct.
to counter this. See above for many pieces of evidence that the author claims it as a theory as well. So much evidence at this point I can't see any reasonable editor continuing to contradict it in good faith. Strongly recommend
at this point. The Norvig source is a tertiary source by definition: an introduction/survey/handbook to the whole field of AI. Even if interpreted as secondary by some stretch, one or two sources do not constitute
and removed. I suggest moving any challenged material to the article's talk page, where it can be stored and accessed for the purpose of finding supporting citations. The article needs some TLC, and is worth saving.
2854:. Whether it's my "opinion" depends on whether it's grounded in fact, since not all beliefs or judgments are mere 'opinions'. Beware of polemical framings. See e.g. the contents of Anderson and Anderson's anthology on 3070:. A Friendly AI is a kind of agent, not a kind of theory; and the fact that there is a thing (or things) called 'X theory' that are associated with X, doesn't tell us anything directly about the nature of X itself. 318:
Since the subject appears to be non-notable and/or original research, I propose to delete the article. Although the general issue of constraining AIs to prevent dangerous behaviors is notable, and is the subject of
it. Your presence in the book and my presence in the book are clearly not the issue, since it is now clear that we take opposite positions on the deletion question. So perhaps that argument can be put aside.
of "Friendly AI", not a partisan. I neither contributed to the entry nor helped "orchestrate" it. If you've seriously any doubts on that score, why don't you drill through the history of the article's edits?
means and why their close relationship to the people and processes behind the sources they promote would need to be a consideration. Your close proximity to the source(s) is sufficient. You can continue to
Lightbound, what are these mysterious "ties" of which you speak? Yes, I have criticized in print MIRI's conception of Friendly AI; but this is not grounds for suggesting I might be biased in its favour (!).
Lightbound, I was invited to contribute to the Springer volume as a critic, not an advocate, of the MIRI perspective. So to use this as evidence of bias in their favour takes ingenuity worthy of a better cause.
You seem to want to delete it because you dislike Yudkowsky's views; but 'Yudkowsky's views are false' is not grounds for deletion, any more than 'Yudkowsky's views are a theory' is. I noted already that
Third, you do not seem to have noticed, Lightbound, that when I entered the discussion I voted against David Pearce! It therefore makes no sense to claim that I was canvassed into the discussion by him.
Lightbound, forgive me, but you're missing my point. I'm a critic of the MIRI conception of an Intelligence Explosion and Friendly AI. Many of the contributors to the Springer volume are critical too.
Aghh, Lightbound, please re-read. I am a critic of "Friendly AI"! I would like to see a balanced and authoritative Knowledge (XXG) entry on the topic by someone less critical than me - not polemics. --
because of the orchestration evidenced, but is further weakened by the fact that authors/individuals closely related to the author of "Friendly AI" are part of the volume. Evidence of that relation can be shown
799:. As pointed out above, the book is oriented towards a non-technical audience. Again, even if we let this source pass (which we shouldn't), this is not sufficient in quality or quantity to warrant a 3550:
topic; and, here, for a highly esteemed AI textbook and an AI topic. You also don't seem to have noticed that I added two independent scholarly references to the lead, not one; so your citations of
Where has it been rigorously defined that there is a categorical exclusion in the type of automation, or its level of complexity or "intelligence", that places it outside the domain of machine ethics?
as making it safe, and Friendly AI theory (the research project / subfield) is mainly about making it safe'. Obviously the two aren't unrelated, but they aren't in a subset relationship either. -
That's clearly untrue. There are no notable, credible secondary/tertiary sources on the theory of "Friendly AI". Prove us wrong by linking them! It can't be done, because they don't exist. --
entails the "Friendly AI" theory and I'll gladly concede; however, if you cite the anthology from Springer, then it has its own issues, though largely moot as one source is not enough for a
The author of "Friendly AI" has himself said, in the quotes I gave, that it is both a theory and a goal. We can agree to disagree on this; the direct evidence is in my corner on that. As for your
and the cross-promotion of their members' books and articles. This does not represent a strong, notable secondary/tertiary source. There needs to be something more. Further, the article is
series and not part of the technical journals. This was pointed out above by other editors as well, which I already diffed. I'm not going to reply further on this line of argumentation. --
into which evidence could be fit). Yet I'm pretty sure I'm not an evil mutant troll who hates Knowledge (XXG) and puppies. :) So, maybe dial the theatrics back, at least a notch or two? -
1752:. The other author is from The Future of Humanity Insititute. It is a verifiable fact that these organizations are aligned and in public cooperation with each other as evidenced by their 277: 464:
sources, this does not substantiate an article for it when it can and should be referenced in the biography for the author. Frankly, that is a stretch itself, given that it doesn't pass
2864:" - Minimally: Less than 50% of machine ethics is about the ethics of superintelligent agents. "regarding the philosophical, ethical, and theoretical implications of the ability for it 2517:(uncapitalized) exists as a philosophical concept. The term appears to get more use than the term "machine ethics". Note that "machine ethics" and "friendly AI" are not synonymous. 374: 2229:
The nomination for deletion isn't just that this doesn't stand on its own. It's that it doesn't stand anywhere. Merging doesn't solve the fact that the actual "Friendly AI" theory is
All you did was repeat what I've said above at least four times. And, again, these are not "personal attacks". This is all externally verifiable information. It is canvassing because
Amnon H. Eden (Editor), James H Moor (Editor), Johnny H Soraker (Editor)) were academically peer-reviewed, including Eliezer's "Friendly AI" paper, and critical comments on it. --
of the Springer volume. On the MIRI staff page you will find the names Helm, Bostrom, Yudkowsky, and Muehlhauser. Bostrom's connection to Pearce is public knowledge, but can be shown
in the numerous sources, which at first glance appear independent, but are actually by the same group of people working in concert. All of this has been documented with links. --
1551:(OP) Please let's try to avoid personal attacks. I don't Davidcpearce canvassed Limitingfactor into the discussion, since they voted in opposite ways. Also, in my understading of 1287:. Again, it doesn't require us to form conjecture about your agenda, only to show proximity. Regardless, this does not solve the notability issue of the source, nor the issues of 2784:
and definitely not about the "software safety engineering". If that is where we are headed then that is well outside the scope of this article's conception. Further, please see
as per the comments above. This has now been repeated several times. I'll be stepping back from this as I believe all that is needed has been shown in all the comments above. --
be a trivial, passing, tangential, one-sentence mention. This is not the case, in spite of the obvious space constraints imposed by the huge range of topics R/N have to cover.
acquiring an introductory understanding of the contemporary field of AI. And, yes, it's not just a homonym; Yudkowsky is cited multiple times, the term is capitalized, etc.
Provide more sources, please. The ones listed are contested because of their non-technical status, and because they don't actually substantiate the theory beyond speculation!
Criticisms of the architectural/mathematical framework that is "Friendly AI" and "Coherent Extrapolated Volition", which are indeed not notably sourced concepts, and are
intelligence can be benevolent, which is distinct from the actual non-rigorous concepts presented by "Friendly AI" as a theory. There are two sub-issues to be unpacked:
If a conscientious reader starts at the top of this page and follows to the bottom, they will see that careful attention has been paid to separating the fact that the
Again, David, claims of bad faith are not going to help your case. The statements made are factual and evidence/references have been provided; that is enough to prove
theory that specifies a kind of architecture for doing this. The Atlantic articles are blog-like, and directly link to the non-notable sources in question as well. --
– Friendly AI is a concept in transhumanist philosophy, under widespread discussion in that field and the field of AI. I've never read that the concept itself is a
Lightbound, for better or worse, all of the essays commissioned for the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment:
I would like to propose a final closing perspective, which is independent of my former arguments and notwithstanding them. Consider this article as an analogy to
argument. Also, your prolific replies to everybody imply that you think you can win by sheer volume. But whether you acknowledge it or not, the generic topic
Comment dated 5th September 2013: "Warning: Richard Loosemore is a known permanent idiot, ponder carefully before deciding to spend much time arguing with him."
and can't seem to find even a sub-heading mentioning "Friendly AI" as a theory. Could you show a page number? Does this source substantially cover the
2627:, because most of machine ethics is concerned with the behavior of narrow AI or approximately human-level AI, not with the behavior of superintelligent AI.) - 2459:, let alone secondary and tertiary independent sources. I'm afraid I'm not sure what we could even do with this article except to make it a redirect into 2451:
that they attempt mathematical modeling and theories. But the problem is that there is not even a primary source that specifies the rigorous mathematical
2132:. This is also clear given that these concepts as an architecture are often presented or introduced in the context of science fiction/laws of robotics. 1479:
in my view. And you continue to pump them when we've asked that you provide at least a few alternatives. But we know why that isn't going to happen! --
...and adopted by a big-name Oxford professor. There are powerful arguments against singleton AGI; Eliezer Yudkowsky's home-schooling isn't one of them.
demonstrates to me that you haven't looked at the source text I cited, yet are still making strong claims about it based on some intuition that it
These are not answerable in a way that would justify what it is you are attempting to do. The source from Russell and Norvig? I just looked at the
"most of machine ethics is concerned with the behavior of narrow AI or approximately human-level AI, not with the behavior of superintelligent AI"
'Strong AI' is a redirect because it's ambiguous, not because it's non-noteworthy. So I don't see any direct relevance to the term 'Friendly AI'.
Perhaps I should add - without claiming to know all the details - that I am troubled by the lack of courtesy shown to Richard Loosemore below. --
"publishing boutique" that may be reputable. Based on the two New Atlantis articles and the nexus book, this topic looks marginally notable per
How would ignorance of - or a mere nodding acquaintance with - the topic and the source material serve as a better qualification for an opinion?
, which would have been a compromise. My "prolific responses" are due to the initial canvassing that took place and defending against issues of
, the minimum condition is that the information can be verified from a notable source. This strengthens the deletion argument, as there are no
article is almost entirely about specific ideas put forward by Yudkowsky and his institute. They may be notable enough to deserve a mention in
? You cite policy and guideline names, but in strange contexts that don't seem to have much to do with the contents of the WP-namespace pages.
The source you are referring to has already been discredited multiple times within the comments here, with verifiable links and quotes. --
difficult to bring it to an acceptable quality due to the immense falsehood of the topic. This kind of undue weight issue is mentioned in
with the above comment. What LimitingFactor is alluding to at the end of his comment is explained by philosopher Daniel Dennett in his paper
to do so. This is their own words. It is exactly opposite of the claims you are making and by the very people pushing this fringe theory. --
that you helped plan the book. That you weren't merely a contributor who happened to not know anyone involved. This proves the proximity of
This is useful knowledge to anyone making a judgement on this page. Of the two citations you brought to the table to use, both of them are
is not a mainstream view in contemporary economics, and is a 'theory' -- a fringe one, at that -- yet 'Marxism' gets its own page. Ditto
"the book you reference does not provide the mathematical theorem or mathematical conjecture of "Friendly AI" theory" - This seems to be
. My emphasis has been added so it is crystal clear. See, this is part of the problem. There is a pseudoscientific "theory" (read: not a
was deleted. In which I explicitly did substantiate the question they are asking. I'm going to have to go through the log to find it. --
) called "Friendly AI" and then there is the adjective-enhancing AI that refers to the concept, practice, or goal of making an AI friendly
, yes. Much of the article is unverified, and rather than the whole article being deleted, unverified statements can be challenged via
As to your very different charge of having "a close relationship with the source material, topic, and reference", well, yes! Don't you?
. That a source is from a major publication does not automatically make it sufficient to establish the due diligence in the spirit of
for the practical side. But these are basic facts within the field, and this basic nature is part of the problem of establishing
Fringe or not, this concept is a relevant subject of debate. The topic was discussed in the 1990s already, in the context of
Indeed, I would expect a very long discussion like this to affect my stats, as I have been away from the Knowledge (XXG) for
@Lightbound: Your edits connected to this AfD make up over 11% of your contributions to Knowledge (XXG). I suggest you read
argue against Yudkowsky even after he's been largely removed from the article, is starting to look like you are attempting a
; the attempt to rebrand a concept and redefine what it means when it's always been about what is already being covered under
Second, I did not become an editor in order to comment here: I have been registered as a Knowledge (XXG) editor since 2006.
created by Yudkowsky to encompass a number of arguments that he and people closely associated with him have made on the subject of
110: 961:
Lighthound, the ad hominems are completely out of place. I have no affiliations whatsoever with MIRI or Eliezer Yudkowsky.
305: 3764: 3163: 1978: 1844:
research. I came here because there was a discussion in progress, and I felt that I had relevant information to offer.
How much is "most"? At what point does that subjective interpretation become justified in a neutral observer's eyes?
on the topic of Friendly AI" is factually incorrect. It's a claim that you might on reflection wish to withdraw. --
mathematical disproof of a published, peer-reviewed academic anthology? Have you even read the book in question? -
If you came here because someone asked you to, or you read a message on another website, please note that this is
. I know this is confusing, but that is due to the unfortunate naming of it. Here is the direct evidence that it
chapter 4 of the book "Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World"
3662: 3256:"So much evidence at this point I can't see any reasonable editor continuing to contradict it in good faith." - 2801: 2448: 2006: 1081:
with this topic as I did not create the theory nor contribute or collaborate with others who did. The spirit of
156: 3658: 3458: 2659: 2091: 1925: 1859: 1667: 1566: 1380: 1361: 758: 3654: 3630: 3522:
by editing the main article, but still have not brought numerous sources to overcome objections. Opinions on the
3511: 1179: 1167: 1082: 1065: 271: 2875:'equations' or 'really powerful optimization processes' when we're tempted to overly anthropomorphize them. - 3356: 3201: 2728: 2456: 796: 3646: 2651: 1659: 1558: 750: 140: 114: 3739: 3196:, which now redirects to this page and had previously been nominated for deletion. Here is the discussion 3147:"As for your WP:OR theory that we ought not merge or make it a POV in machine ethics" - ??? Have you read 2217: 1731: 1524: 1453: 1395: 1331: 1266: 1222: 1129: 978: 915: 848: 711: 516: 442: 362: 1921: 1855: 1376: 1357: 1323:
ideally by someone less critical of the MIRI perspective than either of us, i.e. a neutral point of view.
541:, and any Knowledge (XXG) article that would feature it would immediately have to contend with issues of 99: 3760: 3155: 3067: 2026: 1648:(OP) Just to restate my case for the deletion proposal, it seems that this "Friendly AI" is a neologism 1004: 36: 3348: 2571:
You're arguing against the article based on material that isn't even included in it anymore. That's a
2479:, which is what this page would quickly deflate to since we have now established it is an attempt at a 2263:. A hypothetical technology, yes. A scientific research objective, yes. A potential solution to the 2010: 1727: 1520: 1449: 1391: 1327: 1262: 1218: 1125: 974: 911: 844: 707: 512: 438: 358: 3689: 3538: 3405: 3328: 3221: 3137: 3035: 2942: 2906: 2868:
and definitely not about the 'software safety engineering'" - Can you cite a source that shows this?
2828: 2556: 2499: 2397: 2309: 2245: 2187: 2158: 2065: 1959: 1902: 1828: 1776: 1617: 1491: 1433: 1303: 1249: 1194: 1098: 1043: 946: 902: 815: 634: 565: 497: 3523: 2364:, and I quote the words of the creator of this "theory" and neologism from that non-notable source: 883: 461: 3234: 3193: 3081: 1703: 1020: 291: 3650: 3621: 3503: 3499: 3230: 3189: 3015: 2762: 2040:
None of those articles above substantiate and rigorously define the concept of "Friendly AI" as a
article needs is some editing for clarity. (and not mentioning the creator's name quite as often)
Neologism created by Eliezer Yudkowsky. Can be more than adequately covered in articles about the
nothing were useful for establishing notability, the article would need to go. Go actually read
2472: 542: 478: 469: 465: 1353: 3733: 3705: 3600: 3555: 3483: 3274: 3175: 3096: 2978: 2880: 2697: 2632: 2452: 2373: 2211: 2105: 2041: 1016: 787: 671: 597: 402: 382: 338: 29:
The following discussion is an archived debate of the proposed deletion of the article below.
3759:
Subsequent comments should be made on the appropriate discussion page (such as the article's
3616: 3592: 3583: 3519: 3313: 2969: 2926: 2777: 2480: 2230: 2129: 1418: 1070: 1024: 550: 546: 538: 35:
Subsequent comments should be made on the appropriate discussion page (such as the article's
2965:
on your part. There is no such thing as a 'Friendly AI theorem' or 'Friendly AI conjecture'.
2719: 2022: 1000: 57: 2476: 2422: 2287:
Knowledge (XXG) is not a sounding board for our opinions, nor a discussion forum to debate
2268: 2170: 2138: 2118: 2109: 2049: 2045: 2014: 2002: 1886: 1881: 1649: 1552: 1476: 1409: 1284: 1163: 1159: 1078: 1074: 930: 887: 534: 3678: 3527: 3491: 3394: 3317: 3260:
is one of the many community norms you need to spend a bit more time with. If you find it
3210: 3126: 3024: 2931: 2895: 2817: 2545: 2488: 2386: 2298: 2234: 2147: 2054: 1948: 1891: 1817: 1765: 1606: 1480: 1422: 1292: 1238: 1183: 1087: 1032: 1008: 935: 891: 804: 623: 585: 554: 486: 3551: 3479: 3148: 3114: 2962: 2789: 2541: 2288: 2183: 2142: 1944: 1288: 1028: 589: 457: 3634: 1757: 3434: 3118: 3020: 2968:"no one, not a single person, has been able to provide a source that substantiates the 2620: 2612: 2460: 2381: 2293: 2206: 2122: 2078: 2018: 1699: 1653: 744: 425: 417: 320: 3572:
Your self-citation seems to only be about David Pearce and whether he was involved in
3430: 1800: 973:
This debate is lame; our time could be more usefully spent strengthening the entry.--
429: 2104:
The issue with a merge is that there still isn't a significant source on the actual
1724:
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9126040
456:
There are several issues here: the first is that Friendly AI is and always has been
3701: 3596: 3426: 3270: 3171: 3092: 2974: 2918: 2891: 2876: 2693: 2628: 2484: 667: 593: 398: 378: 334: 174: 162: 130: 2788:(diff) pointing out that this article's topic is about an attempt at establishing 230: 1989:
section 5.3 of the book "The Nexus between Artificial Intelligence and Economics"
109:
However, you are invited to participate and your opinion is welcome. Remember to
3063: 2624: 53: 3393:, but it is because evidence is being ignored. Nothing further can be done. -- 3023:
as it certainly is not widely accepted as part of the scientific consensus. --
2870: 2533: 1698:
elementary school graduate who invented the term or his Harry Potter fanclub.
468:
as a verifiable topic, but I don't think anyone would object to it. Third, in
2615:, and by AI theorists in publications and conference proceedings surrounding 3526:
and subjective importance of the topic do not overcome sourcing concerns. --
2773: 2572: 2509: 2464: 2108:
of "Friendly AI". Thus, the merge would be based on a concept entailed by a
1748:
It is a primary source with a close relationship. One of the authors is the
1390:
Limitingfactor, many thanks, you're probably right; I should let it pass. --
1174:
who has been actively involved behind the scenes in the planning of the book
2860:. There's a paper or two that discuss superintelligence, but most do not. " 786:
David, there is not a single primary, peer-reviewed journal article on the
747:, but not notable and verifiable enough to deserve an article on its own. 3312:. So, attempting to frame the "deleting every page you see" bit is simply 1598:
he is bringing people into the discussion from outside the Knowledge (XXG)
3438: 3003: 2793: 2361: 1795: 3506:
from "significant multiple independent sources". Source from Springer,
3243: 3077: 3012:
that describes Friendliness, the objective or thing-we’re-trying-to-do"
2812:
In fact, the entire post there is exactly about this, that it would be
2370:
that describes Friendliness, the objective or thing-we’re-trying-to-do"
739:
I think that the subject may be notable enough to deserve a mention in
3674: 929:
that information through a few Google searches. It is sufficient for
3433:. Current article does need balancing. I don't see how a merge with 2445: 661:
I don't know which refutation you're referring to; to my knowledge,
3651:
here, under the FAQ of Humanity+, the organization they co-founded
2182:(but not sure what to merge into). As it is, article is based on 1064:
David, pointing out to administrators that you may be involved in
872:
actual rigorous mathematical conjecture or scientific theory paper
2444:
The creator(s) of "Friendly AI" theory have explicitly stated in
2017:, preservation of verifiable material is preferable to deletion. 3753:
The above discussion is preserved as an archive of the debate.
3638: 2797: 2209:. Not enough strong sourcing for this to merit its own article. 1749: 870:
I still strongly support deletion. David, feel free to cite the
740: 3066:(the topic being: a specific class of hypothetical agent), not 2601: 2575:
argument. The article is well on its way to being repaired.
1217:
This critical stance is not evidence of bias in its favour! --
416:(I'm the user who proposed the deletion) There is already the 72: 1408:
Limitingfactor and David are clearly choosing to ignore what
699:
wasn't one of the editors, all of whom are career academics.
103:(agreement) is gauged based on the merits of the arguments, 1162:. Again, the issue isn't just intention but proximity. And 3233:
theory? I just found out this very page used to be called
3192:
theory? I just found out this very page used to be called
2718:(OP) Sorry, but it seems to me that you are going through 1753: 1597: 1984:
a New Atlantis journal article, reply to previous article
93:
among Knowledge (XXG) contributors. Knowledge (XXG) has
2761:. The following is rhetorical, but explains why we have 3657:
means fully and completely independent, not merely the
3515: 3475: 2922: 2785: 226: 222: 218: 3665:
against me for pointing out these journalistic facts.
3490:, does not substantially cover topic (because it is a 3437:
would improve. Therefore I vote to keep and expand. —
1939:
for dropping that previous line of discourse. I am in
1514:
Lighthound, you've left me scratching my head. I am a
290: 3474:
Sourcing problems still apparent, e.g. Norvig source
3673:"Content Level: Popular/general" and is part of the 1762:
devoid of logical or mathematical rigor on the topic
3661:. I suspect this is why there has been such heated 3117:theory that we ought not merge or make it a POV in 375:
list of Social science-related deletion discussions
304: 3385:and have traditionally been a light editor. Also, 3494:) in way that "removes need for research" as per 2265:existential risk of the technological singularity 2021:would be a reasonable target for such a merge. -- 1172:"He will be joined as a speaker by David Pearce, 43:). No further edits should be made to this page. 3767:). No further edits should be made to this page. 2186:that does not have secondary reliable sources. 1176:, and who contributed two articles in the book." 2810:be useful as input for real world engineering." 357:Several of the external links need updating. -- 3518:indicate with evidence. Editors attempting to 2973:is encyclopedic, significant, verified, etc. - 2385:distinction that is not without difference. -- 1235:Otherwise, I still strongly recommend deletion 395:list of Computing-related deletion discussions 481:. Therefore, and in light of these issues, I 123:Comments may be tagged as follows: suspected 8: 3308:becoming difficult to see good faith. Also, 393:Note: This debate has been included in the 373:Note: This debate has been included in the 2532:The term you would be looking for would be 3310:I did not nominate this page to be deleted 2602:Artificial Intelligence: A Modern Approach 392: 372: 97:regarding the encyclopedia's content, and 970:How else can one make informed criticism? 3387:entry length does not model contribution 2137:Thus, trying to merge doesn't solve the 1158:You may want to review what is meant by 117:on this page by adding ~~~~ at the end. 533:I'm afraid I'm going to have to invoke 2619:(AGI). The topic can't be merged into 3647:here, as an article from The Guardian 3004:is a claim to a theory by its creator 2917:I've now updated my response to you, 1003:, but before we knew that it was an " 18:Knowledge (XXG):Articles for deletion 7: 3060:It is a theory and it is also a goal 3000:It is a theory and it is also a goal 2605:. The idea is also discussed in the 1086:article can not stand on its own. 
-- 329:- I completed the nomination for IP 3008:"This is an update to that part of 2800:of MIRI explicitly stating this in 2366:"This is an update to that part of 702:Either way, to say that there are " 420:article covering these issues. The 2792:as a "theory". I provided, above, 1971:or merge. Secondary sources found: 795:or, at the very least, a rigorous 24: 3258:Knowledge (XXG):Assume good faith 3164:Friendliness in artificial agents 2794:evidence of a claim to a "theory" 1945:The Higher Order Truths of Chmess 704:no primary, peer-reviewed sources 3629:again, it is not just violating 2608:Journal of Consciousness Studies 2515:friendly artificial intelligence 422:Friendly artificial intelligence 193:Friendly artificial intelligence 76: 68:Friendly artificial intelligence 3724:Recommendation for Reformatting 2871:This Luke Muehlhauser interview 2617:artificial general intelligence 2471:. That would at least not give 2469:Artificial General Intelligence 1013:Gödel's incompleteness theorems 3160:Coherent extrapolated volition 1979:a New Atlantis journal article 1015:for the theoretical side, and 474:primary, peer-reviewed sources 1: 3510:, proven to have issues with 2753:First, about your statement: 113:on the part of others and to 3391:I'm stepping back regardless 3728:(this section moved to the 2455:of "Friendly AI" or even a 1417:, David, and bring in more 3784: 3675:"The Frontiers Collection" 3659:appearance of independence 3595:. (And the new sources.) 
- 2279:23:25, 31 March 2014 (UTC) 2248:18:54, 31 March 2014 (UTC) 2221:18:20, 31 March 2014 (UTC) 2196:02:20, 31 March 2014 (UTC) 2161:00:28, 31 March 2014 (UTC) 2096:00:02, 31 March 2014 (UTC) 2068:22:30, 30 March 2014 (UTC) 2031:22:21, 30 March 2014 (UTC) 1962:22:10, 30 March 2014 (UTC) 1930:21:49, 30 March 2014 (UTC) 1905:21:25, 30 March 2014 (UTC) 1864:21:19, 30 March 2014 (UTC) 1831:19:47, 30 March 2014 (UTC) 1805:19:42, 30 March 2014 (UTC) 1779:20:55, 30 March 2014 (UTC) 1736:20:01, 30 March 2014 (UTC) 1708:19:32, 30 March 2014 (UTC) 1672:19:07, 30 March 2014 (UTC) 1620:18:57, 30 March 2014 (UTC) 1571:18:49, 30 March 2014 (UTC) 1529:18:46, 30 March 2014 (UTC) 1494:18:22, 30 March 2014 (UTC) 1458:18:07, 30 March 2014 (UTC) 1436:17:50, 30 March 2014 (UTC) 1400:16:46, 30 March 2014 (UTC) 1385:16:29, 30 March 2014 (UTC) 1366:16:15, 30 March 2014 (UTC) 1336:16:43, 30 March 2014 (UTC) 1306:08:58, 30 March 2014 (UTC) 1271:08:18, 30 March 2014 (UTC) 1252:02:35, 30 March 2014 (UTC) 1227:02:23, 30 March 2014 (UTC) 1197:01:59, 30 March 2014 (UTC) 1134:01:36, 30 March 2014 (UTC) 1101:01:08, 30 March 2014 (UTC) 1046:00:42, 30 March 2014 (UTC) 983:00:59, 30 March 2014 (UTC) 949:00:06, 30 March 2014 (UTC) 920:23:58, 29 March 2014 (UTC) 905:23:21, 29 March 2014 (UTC) 853:23:06, 29 March 2014 (UTC) 818:22:26, 29 March 2014 (UTC) 763:21:14, 29 March 2014 (UTC) 716:22:03, 29 March 2014 (UTC) 568:21:12, 29 March 2014 (UTC) 521:20:43, 29 March 2014 (UTC) 500:20:22, 29 March 2014 (UTC) 447:08:17, 29 March 2014 (UTC) 407:23:28, 28 March 2014 (UTC) 387:23:28, 28 March 2014 (UTC) 367:21:38, 28 March 2014 (UTC) 343:19:00, 28 March 2014 (UTC) 3710:00:06, 3 April 2014 (UTC) 3692:22:01, 2 April 2014 (UTC) 3641:by cross-referencing the 3605:16:42, 2 April 2014 (UTC) 3580:is not a reliable source. 
3541:12:54, 2 April 2014 (UTC) 3463:13:51, 1 April 2014 (UTC) 3442:11:44, 1 April 2014 (UTC) 3408:19:15, 1 April 2014 (UTC) 3361:11:43, 1 April 2014 (UTC) 3331:10:11, 1 April 2014 (UTC) 3295:09:36, 1 April 2014 (UTC) 3279:09:16, 1 April 2014 (UTC) 3224:07:31, 1 April 2014 (UTC) 3180:09:16, 1 April 2014 (UTC) 3140:07:14, 1 April 2014 (UTC) 3101:07:03, 1 April 2014 (UTC) 3038:06:47, 1 April 2014 (UTC) 2983:06:15, 1 April 2014 (UTC) 2945:06:04, 1 April 2014 (UTC) 2909:05:56, 1 April 2014 (UTC) 2885:05:46, 1 April 2014 (UTC) 2831:05:03, 1 April 2014 (UTC) 2733:22:53, 1 April 2014 (UTC) 2702:17:08, 1 April 2014 (UTC) 2664:13:33, 1 April 2014 (UTC) 2637:04:45, 1 April 2014 (UTC) 2582:03:28, 2 April 2014 (UTC) 2559:09:50, 1 April 2014 (UTC) 2524:09:24, 1 April 2014 (UTC) 2502:05:53, 1 April 2014 (UTC) 2432:05:39, 1 April 2014 (UTC) 2400:03:13, 1 April 2014 (UTC) 2340:02:45, 1 April 2014 (UTC) 2324:You didn't present it as 2312:00:23, 1 April 2014 (UTC) 2171:Blow it up and start over 676:10:23, 1 April 2014 (UTC) 637:09:58, 1 April 2014 (UTC) 602:09:34, 1 April 2014 (UTC) 62:06:58, 5 April 2014 (UTC) 3756:Please do not modify it. 3508:"Singularity Hypothesis" 32:Please do not modify it. 3669:The Springer volume is 3202:mathematical conjecture 2890:Some of my response to 2814:"unethical and stupid " 2804:(literally, days ago): 2457:mathematical conjecture 1005:epistemic impossibility 155:; accounts blocked for 125:single-purpose accounts 95:policies and guidelines 3578:Singularity Hypotheses 3574:Singularity Hypotheses 3488:existence ≠ notability 3267:argumentative skeleton 663:Singularity Hypotheses 2611:by philosophers like 3516:comment above (diff) 3351:and give it a rest. 2328:of a theory, but as 2044:beyond merely being 1164:here is the evidence 3235:Friendliness Theory 3194:Friendliness Theory 3082:Alternating current 2862:How much is "most"? 2362:a claim to a theory 1021:reverse engineering 880:stand-alone article 801:stand-alone article 107:by counting votes. 
86:not a majority vote 3476:added in this diff 3248:intelligent design 3156:Friendly AI theory 3068:Friendly AI theory 3010:Friendly AI theory 2368:Friendly AI theory 793:mathematical proof 483:strongly recommend 48:The result was 3746: 3563:WP:TRIVIALMENTION 3561:Your citation of 3496:WP:TRIVIALMENTION 3291:The Transhumanist 2866:to make decisions 2782:to make decisions 2774:table of contents 2654:comment added by 2578:The Transhumanist 2520:The Transhumanist 2428:The Transhumanist 2336:The Transhumanist 2294:article namespace 2275:The Transhumanist 2184:original research 2086:seem reasonable. 1758:only 7 pages long 1662:comment added by 1561:comment added by 788:scientific theory 753:comment added by 590:original-research 409: 389: 188: 187: 184: 111:assume good faith 3775: 3758: 3744: 3687: 3684: 3681: 3536: 3533: 3530: 3403: 3400: 3397: 3326: 3323: 3320: 3219: 3216: 3213: 3198:on the talk page 3135: 3132: 3129: 3033: 3030: 3027: 2940: 2937: 2934: 2904: 2901: 2898: 2826: 2823: 2820: 2802:a recent article 2720:No true Scotsman 2666: 2554: 2551: 2548: 2497: 2494: 2491: 2395: 2392: 2389: 2307: 2304: 2301: 2243: 2240: 2237: 2215: 2156: 2153: 2150: 2063: 2060: 2057: 1957: 1954: 1951: 1900: 1897: 1894: 1826: 1823: 1820: 1774: 1771: 1768: 1750:director at MIRI 1674: 1615: 1612: 1609: 1573: 1489: 1486: 1483: 1431: 1428: 1425: 1301: 1298: 1295: 1247: 1244: 1241: 1192: 1189: 1186: 1096: 1093: 1090: 1041: 1038: 1035: 1001:perpetual motion 944: 941: 938: 900: 897: 894: 813: 810: 807: 765: 632: 629: 626: 563: 560: 557: 495: 492: 489: 309: 308: 294: 246: 234: 216: 182: 170: 154: 138: 119: 89:, but instead a 80: 73: 34: 3783: 3782: 3778: 3777: 3776: 3774: 3773: 3772: 3771: 3765:deletion review 3754: 3726: 3685: 3682: 3679: 3663:WP:POV RAILROAD 3534: 3531: 3528: 3492:tertiary source 3401: 3398: 3395: 3324: 3321: 3318: 3217: 3214: 3211: 3133: 3130: 3127: 3031: 3028: 3025: 3006:, and I quote: 2938: 2935: 2932: 2902: 2899: 2896: 2824: 2821: 2818: 2649: 2595:. 
The topic is 2552: 2549: 2546: 2538:delete or merge 2495: 2492: 2489: 2467:redirects into 2393: 2390: 2387: 2305: 2302: 2299: 2241: 2238: 2235: 2213: 2188:Robert McClenon 2154: 2151: 2148: 2084:Delete or Merge 2075:Delete or Merge 2061: 2058: 2055: 2007:WP:SURMOUNTABLE 1955: 1952: 1949: 1898: 1895: 1892: 1824: 1821: 1818: 1772: 1769: 1766: 1657: 1613: 1610: 1607: 1556: 1487: 1484: 1481: 1429: 1426: 1423: 1299: 1296: 1293: 1245: 1242: 1239: 1190: 1187: 1184: 1094: 1091: 1088: 1039: 1036: 1033: 1009:Halting problem 942: 939: 936: 898: 895: 892: 811: 808: 805: 748: 630: 627: 624: 561: 558: 555: 493: 490: 487: 251: 242: 207: 191: 172: 160: 144: 128: 115:sign your posts 71: 41:deletion review 30: 22: 21: 20: 12: 11: 5: 3781: 3779: 3770: 3769: 3725: 3722: 3721: 3720: 3719: 3718: 3717: 3716: 3715: 3714: 3713: 3712: 3698: 3655:WP:INDEPENDENT 3631:WP:INDEPENDENT 3608: 3607: 3581: 3570: 3559: 3544: 3543: 3512:WP:INDEPENDENT 3468: 3467: 3466: 3465: 3455:131.114.88.192 3445: 3444: 3435:machine ethics 3419: 3418: 3417: 3416: 3415: 3414: 3413: 3412: 3411: 3410: 3370: 3369: 3368: 3367: 3366: 3365: 3364: 3363: 3338: 3337: 3336: 3335: 3334: 3333: 3300: 3299: 3298: 3297: 3282: 3281: 3254: 3251: 3239: 3183: 3182: 3170:topic of X'. 
- 3162:, and perhaps 3152: 3145: 3119:machine ethics 3112: 3111: 3110: 3109: 3108: 3107: 3106: 3105: 3104: 3103: 3088: 3085: 3074: 3071: 3047: 3046: 3045: 3044: 3043: 3042: 3041: 3040: 3021:machine ethics 2990: 2989: 2988: 2987: 2986: 2985: 2966: 2959: 2950: 2949: 2948: 2947: 2912: 2911: 2857:Machine Ethics 2842: 2841: 2840: 2839: 2838: 2837: 2836: 2835: 2834: 2833: 2742: 2741: 2740: 2739: 2738: 2737: 2736: 2735: 2709: 2708: 2707: 2706: 2705: 2704: 2684: 2683: 2682: 2681: 2680: 2679: 2670: 2669: 2668: 2667: 2656:131.114.88.192 2640: 2639: 2621:Machine ethics 2613:David Chalmers 2589: 2588: 2587: 2586: 2585: 2584: 2564: 2563: 2562: 2561: 2527: 2526: 2461:machine ethics 2443: 2442: 2441: 2440: 2439: 2438: 2437: 2436: 2435: 2434: 2409: 2408: 2407: 2406: 2405: 2404: 2403: 2402: 2382:machine ethics 2347: 2346: 2345: 2344: 2343: 2342: 2317: 2316: 2315: 2314: 2282: 2281: 2253: 2252: 2251: 2250: 2224: 2223: 2207:Machine ethics 2199: 2198: 2166: 2165: 2164: 2163: 2135: 2134: 2133: 2126: 2123:machine ethics 2099: 2098: 2088:131.114.88.192 2079:Machine ethics 2071: 2070: 2034: 2033: 2019:Machine ethics 1998: 1997: 1996: 1991: 1986: 1981: 1973: 1972: 1965: 1964: 1922:LimitingFactor 1914: 1913: 1912: 1911: 1910: 1909: 1908: 1907: 1871: 1870: 1869: 1868: 1867: 1866: 1856:LimitingFactor 1854:accusations. 

Index

Knowledge (XXG):Articles for deletion/Friendly artificial intelligence — result: keep; closed by Tawker (talk), 06:58, 5 April 2014 (UTC). See also the article's talk page and deletion review.
Friendly artificial intelligence (edit | talk | history | protect | delete | links | watch | logs | views)

Not a vote: this discussion is not a majority vote; the outcome is determined by consensus, weighing arguments against policies and guidelines. Assume good faith and sign your posts. Comments by single-purpose accounts may be tagged with {{subst:spa|username}}; comments by canvassed users with {{subst:csm|username}} or {{subst:csp|username}}; suspected sockpuppetry may also be noted.

Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.