
To Raspberry Pi or not, that is the question.


TerryE


I ran a Pi for several years non-stop without problems - though only converting BBC DASH audio streams, not writing to the card.

 

I'm currently planning to use one again, this time for monitoring and enhancing the control of UFCH (in conjunction with a Shelly Pro 4). For me, the advantages are the ability to easily mount a Pi on a DIN rail in a consumer unit (more-or-less essential for this project), and the availability of the PiJuice to bridge short power outages and handle associated shutdowns and restarts (though I've not tried it yet), without worrying about heat. I may upgrade from an SD card to an mSATA SSD using the Geekworm X850 (or similar) shield to add robustness.

 

In other circumstances I may choose a different option, so I'm watching this discussion with interest.


23 hours ago, SteamyTea said:

Done.

13:00 09/02/2023 (ish)

 

[photo attachment: meter reading]

Part 2

Works out as 1.42 W for an early RPi Zero W plus the CC energy monitor.

[photo attachment: meter reading]
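As a sanity check on that meter reading, a constant 1.42 W comes to roughly 34 Wh/day, or about 12.4 kWh/year:

```python
# Continuous draw of the RPi Zero W + CC energy monitor (the 1.42 W measured above).
POWER_W = 1.42

wh_per_day = POWER_W * 24               # energy used per day, in Wh
kwh_per_year = wh_per_day * 365 / 1000  # energy used per year, in kWh

print(f"{wh_per_day:.1f} Wh/day, {kwh_per_year:.1f} kWh/year")
```

So even running 24/7, a Zero W plus monitor is barely a rounding error on an annual bill.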

 

Going to pop it on the other RPi that is just logging one temperature and uploading it to the dark web via a small webserver.

 


Finally got around to digging out my old solar power battery banks.

This one seems shot.

[photo attachment: swollen battery bank]

 

Not sure what to do with it. May set fire to it just to see what happens.

 

The other one was ok.

So I fully charged it last night; it took 26 Wh.

Then I put it in the power line for my back garden temperature logger, popped a couple of energy loggers on it, and am waiting to see how it goes.

So far it has drawn nothing from the mains and almost 2 Wh from the battery/solar.
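If the logger draws about the same as the Zero W setup earlier in the thread (an assumption — the garden logger's actual draw isn't stated here), a full 26 Wh charge would carry it for the best part of a day on its own:

```python
# Rough bridging time of the battery bank, assuming a ~1.42 W load
# (the Zero W figure from earlier; this logger's draw wasn't measured).
BATTERY_WH = 26.0
LOAD_W = 1.42

hours = BATTERY_WH / LOAD_W
print(f"~{hours:.0f} hours of runtime from a full charge")
```

That gives roughly 18 hours, so with even a little solar top-up it could run indefinitely through summer.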

[photo attachment: meter reading]

 

Got it set up on a little stand on the back (NE facing) window cill.

[photo attachment: battery bank on window sill]

 

Shall see what the meters read tomorrow.


2 hours ago, SteamyTea said:

This one seems shot.   Not sure what to do with it. May set fire to it just to see what happens.

Shot is an understatement!  It looks like it is about to pop!  It needs junking / replacing.  If you do recycle it, then stick it in an airtight plastic bag, so that if it does pop, the lithium won't get exposed to air and spontaneously combust.


Going back to my OP and "shall I move HA etc. off RPis", I have now got my laptop-based ProxMox setup working stably.

  • The laptop is used as a NUC with inbuilt battery backup.  It's got a 6-year-old Core i5 CPU with about 3-4× the power of an RPi4, 8 GB RAM and a 1 TB SSD.  Absolutely poodles for anything I need.
  • Proxmox supports both VMs and LXC containers. I have:
    • The standard HA VM install in 1 VM courtesy of Proxmox Helper Scripts (PHS)
    • A VPN gateway also from PHS so I can remote in to my LAN securely
    • A Docker LXC also from PHS.
  • Most of my other services run within this last Docker environment; these were migrated from an RPi4 that hosted the Docker environment for my application services: [nginx, php8-fpm, mariadb and redis] for my blog; [ftp] for outside cameras; [pihole and unbound] for ad-blocking and private DNS.

So this laptop does everything that I need and with lots of headroom in terms of processing power, RAM and storage.

 

Note that I prefer using a Docker Compose project for my application stack (as opposed to native ProxMox LXCs) because this way the entire stack setup is defined in some 750 lines of configuration, split across fewer than 20 files, all of it under GitHub configuration management.  OK, using Docker means that you can't just hack fixes into your runtime setup; you need to resolve issues point-by-point, and then update the YAML and other config files to reflect this, so the debug journey is more convoluted, but the end result is very clearly and crisply defined.  Also, given that LXCs and namespace mapping are all supported by the Linux kernel, the overhead introduced by Docker is small: the main runtime impact is the slight overhead of the docker-proxy processes.

 

So I now have 2 × RPi4 and 1 × RPi2B free to use on my IoT projects. 🙂


14 minutes ago, TerryE said:

have now got my laptop-based ProxMox set up working stably.

Just for a laugh, put an energy monitor in it and see what it uses.

 

As an aside, I have found out why my cheap 'solar UPS' failed.

It was an old, and very cheap, battery bank, so when it was left for a few hours, the voltage dropped too low to run the RPi, even when plugged into a 2A charger.

It was still worth a go, and now I know not to do it again.


2 hours ago, SteamyTea said:

Just for a laugh, put an energy monitor in it and see what it uses.

Already done that. Around 15 VA at the few-percent peak utilisation it currently runs at, or roughly 10p/day at current prices.

 

Given that this acts as a 15 W heater in the winter, which is needed anyway, the net cost is 0.025 × 17 × the tariff increment, or about 4p/day.  However, my 2 × RPi4s ran at ~10 W.  They are powered down now, so the net delta is a third of this, or just over 1p/day.
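For a rough cross-check of the gross figures (the tariff here is an assumed illustrative rate, not a quoted one), the always-on costs work out like this:

```python
# Gross running-cost comparison: always-on laptop vs the two RPi4s it replaced.
# The tariff is an assumed illustrative figure (GBP/kWh), not an actual quoted rate.
TARIFF_GBP_PER_KWH = 0.28

def daily_cost_pence(watts: float) -> float:
    """Gross cost in pence of running a constant load for 24 hours."""
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * TARIFF_GBP_PER_KWH * 100

laptop = daily_cost_pence(15)  # the ~15 VA laptop
rpis = daily_cost_pence(10)    # the ~10 W pair of RPi4s
print(f"laptop: {laptop:.1f}p/day, RPi4 pair: {rpis:.1f}p/day, delta: {laptop - rpis:.1f}p/day")
```

That gives roughly 10p/day gross for the laptop; the net figure is lower still in winter, since the waste heat offsets heating that would be paid for anyway.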

 

In the noise.


33 minutes ago, Nick Thomas said:

I have a https://pcengines.ch/apu2e4.htm which claims "6-12W depending on load". It runs my whole online presence, not just the home automation. It'll even do docker if that's your thing ^^.

 

I've been running an APU2D4 as an OPNsense router for nearly four years and it's been rock solid.

 

Good support too - I had an issue when I first tried to use it (turned out to be my laptop's ethernet port), and they were highly responsive and helpful.


@Nick Thomas @jack, TBH, apart from my 1st home PC 30 years ago, I haven't bought a pre-configured PC. I've always built and configured my own, buying the MB, CPU, case and peripherals, so if I were doing this now, I would probably do very much as you suggest.

 

My 1st private laptop was a freebie from the Innotek guys at Sun in 2008 IIRC, a thank-you for setting up their VirtualBox forum and being a moderator on it.  I am not really interested in some beast that is needed to run Win11 adequately, as I haven't used Windows for non-work use for well over 15 years (except that I did need dual-boot for some VirtualBox development).

 

Since switching to Ubuntu, I have used ex-business laptops, which you can pick up for a quarter of the buy-new price if you are willing to accept a scuff or two, then maybe adding extra RAM and upgrading the HDD (or, later, swapping in an SSD).  At these prices, I personally don't think it's worth buying a new NUC-style device.  I have also been buying the odd RPi and lots of ESP SoCs for IoT and hobby work over the last 7 years or so.

 

The current laptop that I am using for ProxMox had been sitting powered off on a shelf in my office for over a year.

 

My one extravagance these days is that I use a (bought-new and now) almost 2-year-old Acer Spin 715 Chromebook for all of my interactive work and internet browsing and viewing. This is twice as powerful as my last Ubuntu laptop, and less than half the weight; the build quality is on a par with a MacBook. I also SSH or HTTPS into all of my servers from this, and I use the native Debian LXC that runs within ChromeOS a lot.

 

As to Docker, this is the stack that runs this forum.  Brilliant, IMO.


I realise that this is a bit of a polemical digression, but maybe no more than Nick's about RPi power use.  30 years ago, I was the technical manager for a ~100-strong SW project.  We used an Oracle code generator which took around 50K "lines" of logical design, business rules and screen definitions, and generated around 1M lines of actual code -- except that the developers then needed to tweak maybe 5-10% of it to implement the actual functionality that was required.  This proved to be a one-way trap: you were left with ~1M+ lines of code to maintain.  Yes, the Oracle framework could let devs update the logical design, etc. and regenerate the code, but this would lose those customisations, which then needed to be reapplied manually -- or you just worked with the 1M-line code base from then on.

 

Since this experience, I have had a total aversion to any form of generator system which isn't properly orthogonal and in some sense minimal, as these rapidly become unmaintainable.  You as the author will have enough trouble remembering what you did and why a year or two later, let alone if you need someone else to pick up this gauntlet.

 

With Home Assistant, if you want to add a service like MQTT, for example, then you click one button to install it and edit one file through a configuration tab to change maybe a dozen or so configuration lines, and that's it: you have a working service that is version-updatable and properly backed up against failure.  If you do your own application stack with Docker, you might end up with <100 lines of setup and config per service, all within a single file hierarchy that you can maintain in git.  Yes, you could do this natively on a dedicated server, but where is the record of what you have installed and which files you have changed?

 

With ProxMox, there is a nice website which bundles up a whole load of installation scripts to create VMs and LXCs for dozens of services and applications: cut and paste the install command, answer a few Qs, and you then have a running MariaDB LXC or whatever.  However, from the perspective of maintenance and configuration management, this is no better than running a dedicated server per application.  Hence I use one VM for HA, and a Docker LXC for all of my other applications and services.  This might sound inefficient -- running a bunch of containers inside a ProxMox container -- but in reality all these containers run as LXCs over the Linux kernel (which supports nested namespaces, etc.).  There is a slight overhead for the docker-proxy daemons handling the mapped sockets.  So long as the lower FS is something such as ext4 that works well with overlayFS, the CPU hit is minimal.

