
What should I consider?



On 06/01/2021 at 16:44, TonyT said:

Network cable to TV location

Network cable to PC/Xbox location.

Have Alexa sitting on top of the microwave in the kitchen when I want to play music while cooking.

That's my smart technology.

 

 

 

Given we're both electronic engineers by education and the other half still works in the semiconductor industry, our house (completed 2016) is remarkably luddite by design.

 

We did put cat6 in every room and that has served us well in hooking up static things like consoles and TVs. 

 

We've recently branched into the world of smart plugs, which was handy for asking Alexa to turn the Christmas lights on and off.

 

Also bought some battery-operated wireless blinds from IKEA for the rear sliders (external blinds were not a viable option) and they're working a treat - they come up automatically at sunrise and we put them down manually (or ask Alexa) in the evening.


10 minutes ago, Adrian Walker said:

I would put CO2 monitoring on your list (near the top, IMHO). This will tell you, or confirm, that you have a healthy environment.

Mandatory in Scotland; it has to be in the bedroom.


12 minutes ago, JohnMo said:
23 minutes ago, Adrian Walker said:

I would put CO2 monitoring on your list (near the top, IMHO). This will tell you, or confirm, that you have a healthy environment.

Mandatory in Scotland; it has to be in the bedroom.

CO2 or CO monitors?


3 hours ago, JohnMo said:

MVHR: once commissioned, don't touch it except for maintenance. If the toilet smells, hit the boost switch; same for showering or bathing. Humidity controls work, but not reliably in our climate. No point in automation.

My MVHR works surprisingly well when the humidity gets too high (i.e. after a shower or bath) and automatically boosts. If it didn't, or was too slow, it has a bypass to trigger boost that could be linked to a sensor in the bathroom (I have Aeon MultiSensors, which record a range of readings, e.g. humidity, light, etc.).

Automation is good and does work (though not 100%, of course!). But if you are not technical, or it's just not your thing, then yes, keep it as simple as possible.


On 03/01/2021 at 19:29, BartW said:

- smart lighting for LED / lighting schemes / ambient / the likes of Lutron / Crestron / Fibaro

- external CCTV

- internal security (quite fixed and NOT open to integration, as it is Verisure, but comes with a handy app that allows a degree of controllability)

- smart heating / like Nest - although quite limited as regards integration with other smart things

- MVHR / ASHP?

- audio in the house / I have done this on a couple of properties so far, but looking for an adaptable system. Been using ceiling speakers and central AV for Main Zone + Zone 2, e.g. Yamaha RX-V677. Whilst I appreciate Sonos gives the Multiroom option, perhaps there is something better?
- Visual in the house. It will be a 4 bed family home, with one central location for TV in the open plan living room. I appreciate this is enough for the two of us (so far) for now, but it may change. I want flexibility NOT to be made to watch CBeebies all day, or in the evening. So, I would gather that all Bedrooms should at least be pre-wired for basic Freeview 

- smart blinds / I always loved the idea of them going up when the Sun rises, and going private in the evening. It may be a big ask (financially for sure), but happy to settle on just electric blinds, perhaps controlled via an app?

- smoke and heat sensors from Nest? Just to give the added layer of peace of mind?

This reads like a shopping list of fun and frustration to me!

 

Lighting - I use Qubino Z-Wave DIN modules.

 

CCTV - Blue Iris on a PC seems like the dogs!

 

Internal security. You mean like face recognition to stop SWMBO entering the man cave? Or some video analysis of who opened your wine? Yes, all this needs to be done.

 

Heating. Again, I link Fibaro Z-Wave relays to my HomeSeer hub.

 

MVHR/ASHP - BOTH!

 

Audio in house. Raspberry Pi with Max2Play - once running, it is set and forget. Until you fiddle with it and break it...

 

Visuals. Depends: big FO TV in the lounge, TV bed in the bedroom with a PS5 - for when extra bedroom fun is required.

 

Smart blinds - I didn't bother with. There's only one blind in the build, so I couldn't be arsed to automate it.

 

Smoke/heat, though I haven't installed it yet, will be a separate system. I'm not keen on automating it with, say, HomeSeer: Windows does an update, so HomeSeer doesn't boot, then there's a fire and I burn to death because "Windows has to do an update". Maybe overkill (no pun intended).

 

Other things you WANT

 

Roborock S7 with auto-empty dust collector (have to get that direct from China for the moment).

 

Front door with fingerprint scanner - for when you are too lazy to get the key out. Works very well (bit of a dog when your finger is wet). Also, SWMBO 'suffers' using it: the ability to move one finger at a constant rate across the sensor consistently eludes her, much to my pleasure.

 

Video doorbell/intercom. I use DoorBird because it has an open API and can therefore interface with anything. Currently, when the doorbell rings we get the death tune from the C64 Master of Magic playing, which SWMBO hates with an absolute passion. A notification is sent to my phone and any active iPads in the house; then I can talk to the fool at my door, or indeed unlock it and let them in.
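To give a flavour of what "open API" buys you here: a hedged sketch of driving a DoorBird over the LAN. The endpoint name follows DoorBird's published HTTP API (e.g. `open-door.cgi`); the host, user and password shown are placeholders, so check your firmware's API documentation before relying on any of this.

```python
# Hedged sketch: trigger a DoorBird's door relay over its local HTTP API.
# Host/user/password are placeholders; endpoint path per DoorBird's docs.
import base64
import urllib.request


def doorbird_url(host: str, endpoint: str) -> str:
    """Build a DoorBird API URL, e.g. http://<host>/bha-api/open-door.cgi"""
    return f"http://{host}/bha-api/{endpoint}"


def doorbird_open_door(host: str, user: str, password: str) -> None:
    """Fire the door relay - the same action as pressing 'unlock' in the app."""
    req = urllib.request.Request(doorbird_url(host, "open-door.cgi"))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")  # device uses basic auth
    urllib.request.urlopen(req, timeout=5)


# Example (placeholder credentials):
# doorbird_open_door("10.0.0.5", "ghchdi0001", "secret")
```

The same pattern (build URL, authenticate, GET) covers the other endpoints, which is why it interfaces so easily with a home-automation hub.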

 

Robot grass cutter - I will if we have any grass!

 

Geo-fencing via phone, AirTag or similar.

 

EV charger - you know you need to do this.

 

So much fun. So much money to spend. So much to go wrong. But the joy!


On 03/01/2021 at 19:29, BartW said:

- smart lighting for LED / lighting schemes / ambient / the likes of Lutron / Crestron / Fibaro

- external CCTV

- internal security (quite fixed and NOT open to integration, as it is Verisure, but comes with a handy app that allows a degree of controllability)

- smart heating / like Nest - although quite limited as regards integration with other smart things

- MVHR / ASHP?

- audio in the house / I have done this on a couple of properties so far, but looking for an adaptable system. Been using ceiling speakers and central AV for Main Zone + Zone 2, e.g. Yamaha RX-V677. Whilst I appreciate Sonos gives the Multiroom option, perhaps there is something better?
- Visual in the house. It will be a 4 bed family home, with one central location for TV in the open plan living room. I appreciate this is enough for the two of us (so far) for now, but it may change. I want flexibility NOT to be made to watch CBeebies all day, or in the evening. So, I would gather that all Bedrooms should at least be pre-wired for basic Freeview 

- smart blinds / I always loved the idea of them going up when the Sun rises, and going private in the evening. It may be a big ask (financially for sure), but happy to settle on just electric blinds, perhaps controlled via an app?

- smoke and heat sensors from Nest? Just to give the added layer of peace of mind?

Oh yes.

 

For the love of God take all my money to have this in my house!

 

 


51 minutes ago, pocster said:

For the love of God take all my money to have this in my house!

I am not usually bothered by this sort of thing.

But with those two I am.

 

Freaky feeling: Why androids make us uneasy

We're often creeped out by human-like robots or animated characters, but what they do to our minds is more complex than you might think

 
LIFE 9 January 2013

By Joe Kloc

 

[Image: Too close for comfort (Timothy Archibald)]

 

See more in our gallery: Uncanny android sightings, from Freud to Hollywood

EIGHT years ago, Karl MacDorman was working late at Osaka University in Japan when, around 1 am, his fax machine sputtered into life. Out came a 35-year-old essay, written in Japanese, sent by a colleague.

It was an intriguing read for MacDorman, who was building hyperrealistic androids at the time. It warned that when artificial beings have a close human likeness, people will be repulsed. He and his colleagues worked up a quick English translation, dubbing the phenomenon the “uncanny valley” (see diagram).

 

They assumed their rough draft of this obscure essay would only circulate among roboticists, but it caught the popular imagination. Journalists used the uncanny valley to explain the lacklustre box office performance of movies like Polar Express, in which audiences were creeped out by the computer-generated stars. It was also blamed for the failure of humanoid robots to catch on. Finding an explanation for why the uncanny valley occurs, it seemed, would be worth millions of dollars to Hollywood and the robotics industry. Yet when researchers began to study the phenomenon, citing MacDorman’s translation as the definitive text, answers eluded them.

MacDorman now believes we have been looking at the uncanny valley too simplistically, and he partly blames his own rushed translation. He and others are converging on an explanation for what’s actually going on in the brain when you get that uncanny feeling. If correct, the phenomenon is more complex than anyone realised, encompassing not only our relationship with new technologies but also with each other.

While it’s well known that abnormal facial and body features can make people shun others, some researchers believe that human-like creations unnerve us in a specific way. The essay that MacDorman read was published in 1970 by roboticist Masahiro Mori. Entitled “Bukimi No Tani” – or The Valley of Eeriness – it proposed that humanoid robots can provoke a uniquely uncomfortable emotion that their mechanical cousins do not.

 

For decades, few outside Japan were aware of Mori’s theory. After MacDorman’s translation brought it to wider attention, his ideas were extended to computer-generated human figures, and research began in earnest into the uncanny valley’s possible causes.

MacDorman’s first paper on the subject examined an idea proposed by Mori: that we feel uncomfortable because almost-human robots appear dead, and thus remind us of our own mortality. To test this, MacDorman used something called terror management theory. This suggests that reminders of death govern much of our behaviour – including making us cling more strongly to aspects of our own world view, such as religious belief.

So MacDorman asked volunteers to fill in a questionnaire about their world views after showing them photos of human-like robots. Sure enough, those who had seen the robots were more defensive of their view of the world than those who had not, hinting that the robots were reminding people of death.

This explanation makes intuitive sense, given that some animated characters and robots appear corpse-like. But even at the time it was clear to MacDorman that the theory had its limits: reminding someone of their own demise does not, on its own, elicit the uncanny response people describe. A gravestone reminds us of death, for example, but it doesn’t make us feel the same specific emotion.

Competing theories soon emerged. Some researchers blamed our evolutionary roots; we have always been primed to shun unattractive mates, after all. Others blamed the established idea that we evolved feelings of disgust to protect us from pathogens. Christian Keysers of the University of Groningen in the Netherlands pointed out that irregularities in an almost-human form make it look sick. Since uncanny robots look very similar to us, he argued, we may subconsciously believe we are at a higher risk of catching a disease from them.

Again, both these theories are incomplete: many disgusting and unattractive things do not, by themselves, elicit that specific uncanny feeling. We know that somebody sneezing on the subway exposes us to potentially dangerous pathogens, yet a subway ride is not an uncanny experience. “There are too many theories,” says MacDorman. “The field is getting messy, further away from science.”

The first clue there was something more complex going on came when neuroscientists began to explore what might be happening in the brain. In 2007, Thierry Chaminade of the Advanced Telecommunications Research Institute in Kyoto, Japan, and colleagues presented people with a series of computer-generated characters that resembled humans to varying degrees, while monitoring their brain activity in an fMRI machine. While it wasn’t the specific aim of the study, the results hinted at a new explanation for the uncanny. When the volunteers observed a character that appeared almost human, activity increased in the part of their brain responsible for mentalising – the ability to comprehend the mental state of another.

Mentalising is understood to be involved in feeling empathy. Could empathic pathways in the brain be responsible for mediating the uncanny response?

More evidence came in 2011 with a second fMRI study, this time led by Ayse Saygin at the University of California, San Diego. The researchers observed people’s brain activity while showing them video footage of a mechanical robot, a human and a lifelike android known to induce the uncanny valley response. Each of these was shown to the participants performing an identical action – but one triggered a notably different result.

When people observed the human or mechanical robot walking, the brain exhibited very little activity. But when participants had to process the lifelike android doing the same action, activity increased considerably in the visual and motor cortices of their brains.

Saygin and colleagues suggested that the feelings of eeriness produced by watching the android may stem from the extra work the brain needs to do to reconcile the robot’s movements with the human-like behaviour it expects based on appearances.

It is thought that the motor cortex houses mirror neurons, which are specialised for a particular task and can also fire when we see another organism performing that task. While opinion remains divided on their role, these neurons have also been implicated in our ability to empathise with others.

The uncanny feeling, then, could be caused by a sort of dissonance in the system that helps us to feel empathy, says MacDorman (see illustration). “It seems related to the ability to feel what something else feels.”

“The uncanny feeling could be caused by a dissonance in the system that helps us feel empathy – the ability to feel what something else feels”

Mori didn’t know this when he wrote his essay in 1970, but he did leave the door open to the possibility. When MacDorman translated the essay into English, he made a crucial simplification. According to the 2005 translation, when we are in the uncanny valley, our feelings of “familiarity” plummet. This quality – along with “likeability” – has provided the framework for countless studies of the uncanny valley, says MacDorman – and this may have been obscuring its possible roots in empathy.


 

 

Suppressed empathy

Mori didn’t actually use the terms familiarity or likeability. Instead he used a neologism, shinwakan, which he invented because there was no opposite to the word uncanny. MacDorman now believes that shinwakan is actually a form of empathy. Last June, he published a new translation that he hopes will prompt researchers to look at the uncanny valley through this lens instead. “The fact that empathy is complex means we can tease it apart,” he says, “and figure out what is really at play.”

In cognitive neuroscience, empathy is often divided into three categories: cognitive, motor and emotional. Cognitive empathy is essentially the ability to understand another’s perspective and why they make certain decisions – to play “social chess”, as MacDorman puts it. Motor empathy is the ability to mimic movements like facial expressions and postures, and emotional empathy is essentially sympathy, or the ability to feel what others feel. MacDorman’s theory is that the uncanny feeling is produced when we experience certain types of empathy but not others. “The question,” he says, “is what kind of empathy is being suppressed?”

To test one possibility, MacDorman, now at Indiana University in Indianapolis, asked people to watch videos of robots, computer-generated characters and real people in situations ranging from harmless to harmful. He then asked the volunteers to categorise these characters as either happy or sad about their situations. In other words, he was measuring participants’ abilities to sympathise with the figures.

MacDorman found that they had a more difficult time determining the emotional state of characters that fell within the uncanny valley. This was, he believes, an indication that emotional empathy was being suppressed. On a cognitive and motor level, all the typical cues for empathy are triggered, but we can’t muster sympathy, he says.

Kurt Gray, a psychologist at the University of North Carolina, Chapel Hill, agrees that the uncanny valley is about our inability to feel certain types of empathy, and that we should start looking at the phenomenon differently. “What Karl did in terms of framing is really important,” he says.

Gray believes he has an explanation for why struggling to sympathise with human-like robots and animated characters would make us uncomfortable. In a recent study, he and Daniel Wegner at Harvard University asked volunteers to take a survey that measured their comfort level with various types of computer capabilities. The idea was to identify which human traits, when exhibited by a machine, make people uncomfortable.

The pair found that people thought computers capable of feeling emotions were the most unnerving. “We are happy to have robots do things, but not feel things,” they concluded.

Gray’s argument is that almost lifelike robots make us feel uneasy because we see in them the shadow of a human mind, but one that we know we can never comprehend. In other words, it’s not just about our failure to sympathise with uncanny robots and computer-generated characters; it’s also about our perception that they can empathise with us.

The particular brand of sympathy we reserve for other people requires us to believe the thing we are sympathising with has a self. And this concession of a mind to something not human makes us uncomfortable.

It follows that as long as we are aware that a robot or virtual character is not human, we will never grant it passage to cross the uncanny valley. Even if we do find a way to make artificial creatures with identical human features, they may still provoke discomfort if we know they are not like us. This possibility has already been explored in science fiction: consider how the human characters reacted to the cylons in Battlestar Galactica, says roboticist Christoph Bartneck of the University of Canterbury in New Zealand. “You have these robots indistinguishable from humans. That was what’s so scary. They are not like us. But they are like us.”

Perhaps this is what Mori was getting at when, years after he penned his essay, a reporter asked him if he thought humankind would ever build robots that crossed the uncanny valley: “Why try?” he responded.

The idea that the uncanny valley may be impossible to cross may come as bad news to Hollywood and robot designers. But it also stands as a sign of something many will find reassuring: that there is a particular feeling of empathy that only humans can share.


2 minutes ago, SteamyTea said:

I am not usually bothered by this sort of thing.

But with those two I am.

 


I want robots/androids. Things I can shout abuse at without response. Things I can torment for pleasure. Slaves to my whims. A future of passive machines to humour me.


On 03/01/2021 at 19:29, BartW said:

I would like to make it a fairly smart home that will be up to date for years to come.

 

Mutually exclusive, I'm afraid - unless you have 100% control over the equipment. This rules out any cloud-based products - unless you host your own cloud.

The people here who use Raspberry Pi SBCs and have re-flashed various 'smart home' products with Tasmota, or rolled their own from scratch, have not only saved a great deal of money but have also gained a large degree of independence from an industry very much in its infancy.

 

I do make one significant exception in my own home, and that is for Amazon Alexa, for one very good reason: Amazon have a wide-open API which allows me to write every line of code for my smart home devices (either built from scratch or Tasmota'd) while taking full advantage of their highly presentable hardware, frequently sold at a ridiculous loss. But Alexa only ever provides an optional control interface - local control through soft switches and web interfaces hosted on phones and tablets keeps everything equally accessible when required.
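For anyone weighing up the DIY route, local control of a Tasmota-flashed device really is this simple: Tasmota exposes every console command over plain HTTP at `/cm?cmnd=<command>`. A minimal sketch follows; the IP address is a placeholder for your own device.

```python
# Minimal sketch: cloud-free control of a Tasmota-flashed smart plug via
# its built-in HTTP API. Commands are sent to /cm?cmnd=<command> and the
# device replies with JSON. The host address is a placeholder.
import json
import urllib.parse
import urllib.request


def tasmota_url(host: str, command: str) -> str:
    """Build the local HTTP command URL for a Tasmota device."""
    return f"http://{host}/cm?cmnd={urllib.parse.quote(command)}"


def tasmota_command(host: str, command: str) -> dict:
    """Send a console command to the device and return its JSON reply."""
    with urllib.request.urlopen(tasmota_url(host, command), timeout=5) as resp:
        return json.load(resp)


# Example (placeholder address):
# tasmota_command("192.168.1.50", "Power ON")   # device replies {"POWER": "ON"}
# tasmota_command("192.168.1.50", "Status 8")   # sensor readings, if fitted
```

Because it's just local HTTP, the same call works from a soft switch, a phone-hosted web page, or an Alexa skill - which is exactly the "optional control interface" point above.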

 

At the very least I would recommend bearing in mind this alternative DIY approach and comparing it to whatever you are thinking of signing up to - ask yourself what would happen if the company stopped supporting your hardware or disappeared overnight.


9 minutes ago, Radian said:

independence from an industry very much in its infancy

This is indeed where we are. HA has made slow progress over the years; the average person doesn't really care - the novelty of turning a light on from your phone or Alexa is about the limit. But ultimately, this is where we are heading. I wish (as others do) to retain control, i.e. little or no cloud dependence.


6 hours ago, JohnMo said:

Humidity controls work, but not reliably in our climate. No point in automation.

 

Ours works perfectly - not a single false positive or negative in the years it's been in place. Rate of rise detection is the secret (>5% in 5 minutes works for me), not absolute threshold triggering. It's the perfect alternative to trying to train partners and kids! 
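The rate-of-rise approach described here can be sketched in a few lines: fire the boost when humidity climbs more than 5 percentage points within a 5-minute window, rather than when it crosses an absolute threshold. The sample interval below is an assumption.

```python
# Sketch of a rate-of-rise humidity trigger for MVHR boost: fire when RH
# rises >= 5 percentage points within the last 5 minutes. Window and
# sample interval are assumptions; tune to your sensor's update rate.
from collections import deque


class RateOfRiseTrigger:
    def __init__(self, rise_pct: float = 5.0, window_s: int = 300, sample_s: int = 30):
        self.rise_pct = rise_pct
        # Keep only as many samples as fit in the detection window.
        self.history = deque(maxlen=window_s // sample_s)

    def update(self, humidity_pct: float) -> bool:
        """Feed one humidity sample; return True when boost should fire."""
        self.history.append(humidity_pct)
        return humidity_pct - min(self.history) >= self.rise_pct


trigger = RateOfRiseTrigger()
# Steady readings around 55% never fire; a shower-sized jump does.
```

The advantage over a fixed threshold is exactly the point made above: on a muggy day the baseline can sit at 70% without triggering, but a shower still produces a sharp rise that stands out.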

 

[Attachment: humidity graph]

Edited by MJNewton

I am experimenting with Apple HomeKit, which no one has mentioned yet. I have bought a HomePod mini, a smart plug, a bulb and a door-opening sensor, and got them all set up and working very well over Thread in the one room they are installed in.

 

Buoyed by the impressive performance - i.e. instant and consistent activation - I bought another bulb and a couple of further door sensors for our two outbuildings. The bulb, in an adjacent room, seems to have joined the Thread mesh and is working. The two door sensors, however, appear to be too far away to join the mesh, so I will probably need to buy at least one more smart plug to extend it; I am hoping that will be enough to bring these sensors into the overall network.

 

I am mainly interested at the moment in building a security system over Thread - it is looking promising so far - but down the line I hope to install Thread-enabled smoke and heat detectors and automated blinds.

