AGI is a term used for a model that can learn any intellectual task that a human being can. Gary Marcus at US software firm Robust.AI says the term is shorthand. “It’s not a single magical thing,” he says. “But roughly, we mean systems that can flexibly, resourcefully solve problems that they haven’t seen before, and do so in a reliable way.”

How will we know if AGI has been achieved?

Various tests have been proposed that would grant an AI the status of AGI, although there is no universally accepted definition. Alan Turing famously suggested that an AI should have to pass as human in a text conversation, while Steve Wozniak, co-founder of Apple, has said he will consider AGI to be real if it can enter a random house and figure out how to make a cup of coffee. Other proposed tests are sending an AI to university and seeing if it can pass a degree, or testing whether it can carry out real-world jobs successfully.

Yann LeCun, chief AI scientist at Facebook’s owner Meta, says there is no such thing as AGI, because even humans are specialised. In a recent blog post, he said that a “human-level AI” may be a useful goal to aim for, where AI can learn jobs as needed like a human would, but that we aren’t there yet.