At some point, if and when AI does become sentient, we’ll need an empirical method for determining the difference between clever programming and machines that are actually self-aware. The closest thing we have to a test for sentience is the Turing Test and, arguably, Alexa and Siri passed that years ago.

## Sentience and scientists

Any developer, marketing team, CEO, or scientist can claim they’ve created a machine that thinks and feels. There’s just one thing stopping them: the truth. And that barrier’s only as strong as the consequences for breaking it.

Currently, the companies dabbling at the edge of artificial general intelligence (AGI) have wisely stayed on the border of “it’s just a machine” without crossing into the land of “it can think.” They use terms such as “human-level” and “strong AI” to indicate they’re working toward something that imitates human intelligence. But they usually stop short of claiming these systems are capable of experiencing thoughts and feelings.

Ilya Sutskever, the chief scientist at OpenAI, seems to think AI is already sentient:

> It may be that today’s large neural networks are slightly conscious

But Yann LeCun, Facebook/Meta’s AI guru, believes the opposite:

> Not even for true for small values of “slightly conscious” and large values of “large neural nets”. I think you would need a particular kind of macro-architecture that none of the current networks possess.

And Judea Pearl, a Turing Award-winning computer scientist, thinks even fake sentience should be considered consciousness since, as he puts it, “faking it is having it”:

> As far as I know we do not have an agreed Turing test for consciousness, except, of course, systems that act and communicate as though they have consciousness. Here, my faithful guideline is: faking it is having it, because it is practically impossible to fake w/o having.

Here we have three of the world’s most famous computer scientists, each of them a progenitor of modern artificial intelligence in their own right, debating consciousness on Twitter with the temerity and gravitas of a Star Wars versus Star Trek argument. And this is not an isolated incident by any means. We’ve written about Twitter beefs and wacky arguments between AI experts for years.

It would appear that computer scientists are no more qualified to opine on machine sentience than philosophers are. If we can’t rely on OpenAI’s chief scientist to determine whether, for example, GPT-3 can think, then we’ll have to shift perspectives. Perhaps a machine is only sentient if it can meet a simple set of rational qualifications for sentience. In which case we’d need to turn to the legal system in order to codify and verify any potential incidents of machine consciousness.

The problem is that there’s only one country with an existing legal framework by which the rights of a sentient machine can be discussed, and that’s Saudi Arabia. A robot called Sophia, made by Hong Kong company Hanson Robotics, was given citizenship there during an investment event where plans to build a supercity full of robotic technology were unveiled to a crowd of wealthy attendees.

Let’s be perfectly clear here: if Sophia the Robot is sentient, so are Amazon’s Alexa, Teddy Ruxpin, and The Rockafire Explosion. Sophia is an animatronic puppet that uses natural language processing AI to generate phrases. From an engineering point of view, the machine is quite impressive. But the AI powering it is no more sophisticated than the machine learning algorithms Netflix uses to try to figure out what TV show you’ll want to watch next.

In the US, the legal system consistently demonstrates an absolute failure to grasp even the most basic concepts related to artificial intelligence. Last year, Judge Bruce Schroeder banned prosecutors from using the “pinch to zoom” feature of an Apple iPad in the Kyle Rittenhouse trial because nobody in the courtroom properly understood how it worked. Per an article by Ars Technica’s Jon Brodkin:

> Schroeder prevented … from pinching and zooming after Rittenhouse’s defense attorney Mark Richards claimed that when a user zooms in on a video, “Apple’s iPad programming creates what it thinks is there, not what necessarily is there.”

Richards provided no evidence for this claim and admitted that he doesn’t understand how the pinch-to-zoom feature works, but the judge decided the burden was on the prosecution to prove that zooming in doesn’t add new images into the video.

And the US government remains staunch in its continuing hands-off approach to AI regulation. It’s just as bad in the EU, where lawmakers are currently stymied over numerous sticking points, including facial recognition regulations, with conservative and liberal party lines fueling the dissonance.
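For what it’s worth, the pinch-to-zoom claim is testable: digital zoom is ordinarily just interpolation, which blends existing pixel values rather than inventing new content. Here is a minimal sketch in plain Python (an illustration of the general technique, not Apple’s actual implementation) of bilinear upscaling, showing that every zoomed-in value stays within the range of the original pixels around it:

```python
def bilinear_zoom(img, factor):
    """Upscale a 2D grid of grayscale values by `factor` via bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back into the source grid.
        sy = min(y / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels; no value outside
            # their range can ever be produced.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

tiny = [[0, 100], [100, 200]]          # a 2x2 "image"
zoomed = bilinear_zoom(tiny, 4)        # zoomed to 8x8
flat = [v for row in zoomed for v in row]
print(min(flat), max(flat))            # → 0 200
```

Real video scalers use fancier kernels (bicubic, Lanczos, and so on), but the principle is the same: interpolation estimates values between existing samples; it doesn’t add new images to the footage.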