Chat GPT for building regulations - insane!


GaryChaplin


16 minutes ago, Alan Ambrose said:

possess knowledge

Encyclopaedias possess knowledge. 'Possess' and 'knowledge' need to be defined in the right context.

18 minutes ago, Alan Ambrose said:

human knowledge

Like arseholes, everyone has an opinion on knowledge; some are right.

18 minutes ago, Alan Ambrose said:

explain how it came to its conclusions

I think they can already.

19 minutes ago, Alan Ambrose said:

well-read parrot

All barristers do, as do many other professionals.


The ‘trick’ will be when it can take the knowledge it has and come up with unique insights into problems. To some extent it can already do some of this in specific situations. However, human ingenuity needs imagination and insight, which are hard to replicate in AI.


1 hour ago, Kelvin said:

However, human ingenuity needs imagination and insight, which are hard to replicate in AI.

AI is being used in the biochemistry field for protein folding and misfolded-protein detection. It can do this without prior knowledge.

So no imagination or insight is needed.

It could be said that insight is holding back the biochemistry field.

 

While doing my ResM I got marked down on one piece of work because I did not reference some equations. As these were very basic energy equations that are taught at school, I pointed out that I did not consider it necessary to reference them for a postgraduate audience.

What had happened is that one of my supervisors had got so used to marking and checking work in a formulaic manner that he had lost the context of the essay.

It should also be compulsory to disagree with your supervisor at postgraduate level; you are not moving the area of study on if you don't. That is what defending your thesis is all about, after all.


It's an interesting field. I once got into a big debate with a bunch of religious fundamentalists (pointless, I know) who insisted that god must exist because how else could intelligence exist. I pointed out that, at the time, Genetic Algorithms were starting to be used to produce 'intelligent designs' by iterative techniques. They argued that it took a human intelligence to design the GA, therefore anything produced by it was a product of human thought. But if the product is novel, something never before considered by a human thinker, it demonstrates that designs can appear to be derived intelligently while being decoupled from thought or intelligent consideration. It totally satisfied me anyway.
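If you've never seen one, a genetic algorithm is a surprisingly small amount of code. The sketch below is a toy in Python (the fitness function is deliberately trivial, standing in for a real design score such as an antenna shape or a truss layout): random candidates are scored, the fitter ones are bred and mutated, and the 'design' that emerges was never written down by the programmer.

```python
import random

GENOME_LEN = 32       # each candidate design is a string of 32 bits
POP_SIZE = 50         # candidates per generation
MUTATION_RATE = 0.02  # per-bit chance of flipping
GENERATIONS = 200

def fitness(candidate):
    # Toy objective: count the 1-bits. A real GA would score an antenna
    # shape, a truss layout, a mousetrap geometry, etc.
    return sum(candidate)

def crossover(a, b):
    # Single-point crossover: splice two parent genomes into a child.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(candidate):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

# Start from entirely random candidates -- no design knowledge at all.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Keep the fitter half as parents, then breed a whole new generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"Best design scored {fitness(best)} out of {GENOME_LEN}")
```

Swap the fitness function for something that scores a real design and the loop stays exactly the same.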


9 hours ago, Alan Ambrose said:

>>> Sure, but it’s not building a database live like the search engines do, which is why its knowledge is limited to 2021. Also, it’s not returning a series of hits based on your search request. It’s trying to interpret what you ask it and reply with an answer it’s derived from your question based on the knowledge it has. It’s why you can prompt it with a few parameters and it’ll respond linking them together, albeit it’s still a bit limited.

 

@Kelvin I think we both have a similar understanding - and just like Google search, the answers are often useful even if we're fairly sure they don't tell the whole story. The interesting questions to me are:

 

+ will it ever get to 'expert' (or even 'competent') level in particular subject areas or is it destined to always produce 'general internet standard' knowledge?

+ does it actually (appear to the average user to) 'possess knowledge', or does it just appear to be a kind of well-read parrot?

+ does that actually differ substantially from 'human knowledge' or are they somewhat the same thing?

+ will it (and AIs in general) ever be able to explain how it came to its conclusions?

 

I think the answers to these questions are: you ain’t seen nothing yet. Give it another 10 to 20 years and AI will blow your socks off. 
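To make the 'interpret what you ask it' point above concrete, this is roughly all it takes to put a building-regs style question to the model programmatically. A minimal sketch, assuming the openai Python package (pre-1.0 interface) and an API key in the OPENAI_API_KEY environment variable; the project details in the prompt are just made-up examples.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# An illustrative prompt: a few parameters about a hypothetical project,
# which the model links together into one derived answer.
prompt = (
    "A detached two-storey house in England is getting a single-storey rear "
    "extension with a flat roof. Which Approved Documents of the Building "
    "Regulations are most relevant, and what U-values would the new roof and "
    "walls typically need to achieve?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT's free tier
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,        # low temperature keeps the answer conservative
)

# The reply is generated from the model's training data (which stops in 2021),
# not from a live search, so it always needs checking against the current
# Approved Documents.
print(response.choices[0].message.content)
```

Whether the figures it quotes match the current Approved Documents is, of course, exactly the 'well-read parrot' question.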


1 hour ago, Adrian Walker said:

IMHO it won’t be anything like 10 or 20 years; AI is developing at an impressive rate already.

Yes - I am immensely enthusiastic about what this technology will do, while being profoundly concerned about our inability to get a grip on the scope for misuse, the lack of attention to wider social impacts, and the dim headlights in our education systems about its likely impact and hence the future prospects for mere humans.


3 minutes ago, MikeSharp01 said:

lack of attention to wider social impacts and dim headlights in our education systems about its likely impact and hence future prospects for mere humans. 

We have had AI since 1984; it has not been a problem for most.

 

 


21 minutes ago, SteamyTea said:

We have had AI since 1984; it has not been a problem for most.

Not like this we haven't, and the 'most' in your comment is telling. The lack of understanding of the likely impacts, Chat GPT being perhaps the clearest marker of this, is staggering. The dystopian futures seen in movies such as the one you cite may not be the future reality, but change is coming and we need to get a grip, somewhat, on its direction and assess the needs of the people in the light of it.

For education, and in many dimensions of education, this is like the calculator debate gone universe scale. Questions like: what is the point of current education paradigms when data, information, knowledge and wisdom (with associated links of increasing understanding by the machine) are all available at the press of a button? I am definitely not a doomsayer, nor against the application of AI, but I am against missing the trick, and against inappropriate application, in the same way that I am generally against eugenics even though we have the capability for it.


6 minutes ago, MikeSharp01 said:

For education, and in many dimensions of education, this is like the calculator debate gone universe scale

This is going to be a problem, but as you know, it is asking the right questions that is the hard part, not the answers received.

I also wonder how filtered the information that these AI units get is. Are they subscribed to all the paywalled journals and the academic networks?

Had a debate with an old work colleague a while back about doing a self-study Masters. His valid reply was that, as a member of the great unwashed, there is a lack of access to the relevant academic journals. I can see this being a problem, but I am still not sure it would make any real difference to subject understanding, though it does run the risk of duplicating work.

Having said that, there is a lot to be learnt from repeating existing work, especially in the sciences, where it is almost compulsory to do so.


2 hours ago, SteamyTea said:

Having said that, there is a lot to be learnt from repeating existing work, especially in the sciences, where it is almost compulsory to do so.

Yes, because this gives us the chance to review the untravelled roads that may have just been passed by on the way to doing other things with slightly different foci.


There’s an interesting case going through the courts just now where Stephen Thaler has applied for two patents citing an AI as the sole inventor. The case has been thrown out on legal grounds, as only a human can be the inventor. Independent analysis of the patents has also concluded that the AI wasn’t the sole inventor.
 

If you gave a neural network the space and information to invent a better mousetrap, based on all the mousetraps currently available, a definition of the problem, and all the information about mice it needed, could it come up with a better mousetrap? However, let’s say we lived in a mousetrap-less world and gave the AI the same information: would it invent the mousetrap?


1 hour ago, Kelvin said:

There’s an interesting case going through the courts just now where Stephen Thaler has applied for two patents citing an AI as the sole inventor. The case has been thrown out on legal grounds, as only a human can be the inventor. Independent analysis of the patents has also concluded that the AI wasn’t the sole inventor.
 

If you gave a neural network the space and information to invent a better mousetrap, based on all the mousetraps currently available, a definition of the problem, and all the information about mice it needed, could it come up with a better mousetrap? However, let’s say we lived in a mousetrap-less world and gave the AI the same information: would it invent the mousetrap?

For now the human is the inventor, and the writer of the AI should perhaps get the credit; but when it does wrong, who takes the fall? This is exactly the problem the insurance world is looking at for driverless / self-driving cars.


12 minutes ago, MikeSharp01 said:

For now the human is the inventor, and the writer of the AI should perhaps get the credit; but when it does wrong, who takes the fall? This is exactly the problem the insurance world is looking at for driverless / self-driving cars.

Generally, it is the operator that carries the can, so in the case of AI, the person who asks it to do something.

'Shooting over Devon' can have many meanings.

And some interesting search results.


I don’t really see what problem the insurance companies are grappling with. While cars still have controls and require an operator to sit in the driver’s seat, the operator is responsible regardless of whether the car can be fully self-driving. Once you remove all the controls, so that there is no operator and the car is truly self-driving, then it’s the car maker (probably).


2 hours ago, Kelvin said:

There’s an interesting case going through the courts just now where Stephen Thaler has applied for two patents citing an AI as the sole inventor. The case has been thrown out on legal grounds, as only a human can be the inventor. Independent analysis of the patents has also concluded that the AI wasn’t the sole inventor.
 

If you gave a neural network the space and information to invent a better mousetrap, based on all the mousetraps currently available, a definition of the problem, and all the information about mice it needed, could it come up with a better mousetrap? However, let’s say we lived in a mousetrap-less world and gave the AI the same information: would it invent the mousetrap?

This is a false dichotomy, just like the Blake Lemoine sentient-AI debacle.

Even if an AI is 10x more intelligent than a human and superior at solving a given problem, that does not automatically impart human civil laws, rights and responsibilities onto it.

An easy test is, if a super human intelligence alien landed in the UK, would they automatically be entitled to a driving licence? Sit university exams? Claim state benefits?

Our laws of civilisation broadly exist to further the project of humanity, in a given country, on earth; we are not obliged to automatically give those rights to other species (super human aliens or robots) based purely on excellent IQ test results. This is an ethical and legal debate that is just starting and will continue long after we're all gone.

 


That’s basically the legal position at the moment where the courts have said only a human can be granted a patent. 
 

My mousetrap question was more about the level of invention: comparing the invention of something that’s just a better version of a thing that already exists versus a completely new, unique something that didn’t exist before, not the moral ethics of whether the AI gains the same level of recognition as a human.
 

My understanding of AI, as it currently stands, is that the first scenario is where we are at the moment, and the second scenario is not quite there yet. There is a third scenario where the AI identifies the problem itself and invents a solution for it with no human intervention at all. The final scenario is that it subsequently manufactures the thing it invented.


8 hours ago, SteamyTea said:

That will be elliptical paths then.

More usually, and perhaps less laconically, the ones you start going down but conclude have no future, so you turn back, taking the experience with you (so no loss), but perhaps unaware of the opportunities that lay just around the bend. So when, later, in more knowledgeable times and with greater experience, you return to the point where you turned around and look ahead from there, it is often fruitful.


5 hours ago, joth said:

An easy test is, if a super human intelligence alien landed in the UK, would they automatically be entitled to a driving licence? Sit university exams? Claim state benefits?

And the answer is YES, because you said they were super 'Human'; but in the case of the driving licence only after 6 months of residency IIRC, in the case of sitting university exams only after paying the fees and taking at least one term / semester of courses, and in the case of state benefits only after their asylum claim had been assessed. :D


I went to Japan in 2014 and at the Tokyo science museum they had a life-size humanoid robot (slightly smaller than your typical western male) called Asimo, made by Honda. It used to be larger but they had to shrink it because people found its size somewhat scary.

 

It could walk, jump and even hop on one leg. You could argue that this is not artificial intelligence, just a marriage of mechanics and robotics, but it was bloody clever. It’s only after watching a child go through the stages of learning how to walk that you realise how bloody amazing the human body is and how difficult it is to replicate human motion robotically. Walking is effectively akin to controlled falling, as the body shifts from one leg to the other.

 

Anyway, they keep updating its capabilities. It was bloody impressive in 2014, and even more so now.
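On the 'controlled falling' point, a toy sketch (my own made-up numbers, nothing to do with ASIMO's actual control system): an inverted pendulum that would topple on its own, kept upright only because a simple feedback controller keeps nudging it back, which is roughly the balance problem a walking robot solves at every step.

```python
import math

# Toy inverted pendulum: a rigid rod pivoting at the "ankle". Left alone it
# falls over; a proportional-derivative controller applies a corrective
# torque, which is the essence of balance as controlled falling.
g = 9.81      # gravity, m/s^2
length = 1.0  # distance from pivot to centre of mass, m
dt = 0.01     # simulation time step, s

Kp, Kd = 40.0, 8.0   # controller gains (hand-tuned toy values)

theta = 0.2   # initial lean angle, radians (about 11 degrees off vertical)
omega = 0.0   # angular velocity, rad/s

for step in range(300):
    # Gravity tries to increase the lean; the controller pushes back in
    # proportion to the lean and to how fast it is growing.
    control = Kp * theta + Kd * omega
    alpha = (g / length) * math.sin(theta) - control
    omega += alpha * dt
    theta += omega * dt
    if step % 50 == 0:
        print(f"t={step * dt:4.2f}s  lean={math.degrees(theta):6.2f} deg")
```

A real biped has to do this in more than one axis while the support point itself keeps jumping from foot to foot, which is part of what makes it so hard to replicate.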

 

 


7 minutes ago, Temp said:

https://www.dailymail.co.uk/sciencetech/article-11648041/Woman-decides-divorce-husband-lover-AI-bot-ChatGPT-TOLD-to.html

 

Woman, 37, decides to divorce her husband and move in with her lover - because AI bot ChatGPT TOLD her to..

Is that true? It is a link to the Daily Mail.

Or did she really just prefer her internet-connected vibrator?


23 hours ago, Temp said:

https://www.dailymail.co.uk/sciencetech/article-11648041/Woman-decides-divorce-husband-lover-AI-bot-ChatGPT-TOLD-to.html

 

Woman, 37, decides to divorce her husband and move in with her lover - because AI bot ChatGPT TOLD her to..

To be fair, I probably trust ChatGPT's sources more than a horoscope writer's, and people make pretty major decisions off that.

