Everything posted by Pocster

  1. Bet @Nickfromwales bought it!
  2. I like a flip, but… piss take!
  3. Progress has been interesting. Qwen Coder is a bit erratic at writing code from prompts: too many errors. We have a pipeline where the error automatically goes to DeepSeek, which rewrites the prompt for Coder again (see the retry sketch after this list)! But at this stage ChatGPT reigns king; it defines and (expletive deleted) me! Makes me test every single step. It does the code as small snippets and documents them a bit too much! But it's a rock-solid implementation. Proper SE stuff! So structure/framework at this stage is complete. Added the UMA-8 multi-microphone array. All good. In the next few days expect the engineer dashboard to show wake word recognised, then STT. At that point things get interesting. It's going to be a massive project in complexity and I'm in awe of what one person can achieve with an LLM. Will swap back to DeepSeek and a better coder model when I get something with more RAM, which could be some wait; though I do smell an M5 Ultra flip opportunity… 😊
  4. A coder who uses an LLM? Absolutely! It's not about writing code anymore; as I said, it's orchestration of the LLM (using the tool). The issue I see is fewer juniors/graduates, and when 'seasoned' programmers/SEs retire, who carries the knowledge and understanding forward?
  5. Yes, and China has already started grappling with this. Chinese courts have reportedly ruled that companies cannot simply fire someone purely because AI can do the job cheaper. That proves this is not just imaginary pessimism — governments and courts are already seeing AI replacement as a labour-market issue. And that is exactly my point. If AI were merely “another productivity tool”, why would courts need to decide whether workers can be dismissed because an AI system now performs the role? The fact they are having to rule on it shows the disruption is real. It may make the cake bigger overall, but it can still destroy specific jobs, squeeze wages, and collapse small teams into one person plus AI. That is a major structural change, not just canals becoming railways.
  6. Another point is the speed of improvement. I accept local LLMs are not frontier models. They are behind the best cloud systems. But the pace is ridiculous. Every few weeks there seems to be a better open/local model, better quantisation, better tooling, better context handling, better coding ability, or better inference speed. That matters because the argument is not “can today’s model replace everyone?” The argument is “where is this going over the next 3, 5, or 10 years?” I’ve never personally seen a technology move this fast. With most technologies you get gradual product cycles. With LLMs, the capability jump over months is noticeable. A model that felt barely useful a year or two ago can now write, debug, explain, summarise, plan and generate code well enough to materially change how one person works. So yes, today’s local models are not AGI and not frontier. But the gap is closing fast enough that dismissing this as just another normal productivity tool feels complacent.
  7. I agree it can make the cake bigger. I’m not saying AI is only bad or that we should ignore it. But “the cake gets bigger” doesn’t mean the slices are evenly distributed. Yes, canals to railways to roads changed employment. But those transitions still destroyed some jobs, shifted power, and forced people to retrain. The fact society eventually adapted doesn’t mean the disruption wasn’t real for the people caught in it. The difference here is speed and breadth. LLMs are not replacing one transport system with another. They touch almost every desk-based industry at once: coding, admin, sales, marketing, support, accounts, legal prep, design, analysis, documentation. My own project is a good example. What would once have needed a small team is now potentially one person directing an LLM, with the AI doing much of the manual coding and iteration. That is brilliant for me as the person using it, but it also means fewer people are needed to produce the same output. So yes, learn to use it. I completely agree. But that doesn’t remove the labour-market issue. In fact it proves it: those who use it well become far more productive, and those who don’t are under pressure. That is not just “another technology” in a mild sense. That changes the structure of work.
  8. My project would have been a small team. Now it's one person who doesn't need to manually code. Just this on its own changes everything.
  9. Quite possibly. You also have to take into account your context window (the chatbot window), so it depends exactly on what your use for the LLM is. You can go really small, i.e. a low RAM footprint, but you are compromising reasoning and accuracy. So it really does depend on intended use; the RAM sketch after this list puts rough numbers on it. Run in 8 GB? Yes. Useful? Depends on use.
  10. Maybe, but I think that understates what's different this time. Most past technologies made human labour more productive. LLMs do that too, but they also start to substitute for parts of knowledge work: drafting, coding, support, admin, research, analysis, design, marketing, legal prep, teaching material, etc. That doesn't require full AGI. You don't need a conscious machine to reduce headcount; you only need a tool that lets one person do the work that previously needed three, or lets a cheaper worker do work that previously needed a specialist. So even if LLMs are "only" productivity tools, the labour-market effect can still be huge. The Industrial Revolution didn't replace every worker overnight either; it reorganised whole industries, compressed wages in some areas, created new winners, and made old skills less valuable. My concern isn't that every job vanishes tomorrow. It's that large areas of white-collar work become more automated, more competitive, and need fewer entry-level people. That alone is enough to be disruptive without invoking AGI. My project would have been a small team. Now it's one person who doesn't need to manually code. Just this on its own changes everything: a hobby project that proves the workflow…
  11. Tricky on Mac and Windows. You have to remember macOS is leaner than Windows, and Linux leaner still, but more work. I wouldn't fight this aspect tbh.
  12. Good! This thread is about LLMs; they require RAM. Thanks though, @JohnMo.
  13. For a programmer/SE, junior roles have certainly been reduced significantly. Why employ someone when an LLM can do it? There's virtually no need to write code anymore 🤯. No need to learn all the libraries. Now it's orchestration for the human. I discussed this with ChatGPT. It worded it as: coding is now cheap and requires virtually no labour! I'm in a fortunate position as I can experiment, spend money, no boss, endless end goal, no risk. Chat agrees that pre-LLM my full project would require a small team of programmers and would take years to develop. Now there's one: me, with zero code to write. 🤯🤯🤯 For reference, as I have only 96 GB, I use the smallest possible context windows. Apart from reducing RAM requirements, this also avoids hallucinations. DeepSeek and Qwen are fed a fresh prompt each time; no historical reference is required (see the stateless sketch after this list). ChatGPT of course needs the history/context, but produces no code. I'll say it again, it's insane!
  14. Right, now, let's do this slowly. I didn't check every website on Earth. You need enough RAM to be useful. 24 GB, once booted, won't leave a lot for a useful LLM. A local LLM can even be installed on a Pi, for example. The keyword is useful; in essence you'd want 64 GB minimum on a Mac as it uses unified RAM. People run these things on much less. These units and above have all but disappeared. M3 Ultras (go check) on eBay (assuming they have real stock) are on for 5k at base-model spec. So that's 1k more than Apple retails them for. OpenClaw etc. has made a lot of people punt on a relatively cheap Mac mini to experiment with.
  15. I’ll give you a 69
  16. A sane person. The problem is unified RAM. The best on Nvidia is 96 GB on an RTX 6000; that's a 10k GPU card!!!! This is why Apple have (unintentionally) jumped to number 1 for local LLMs.
  17. For completeness: yes, it's on Amazon, apparently at an inflated price.
  18. Yeah, it said "out of stock", so that's a clue. The screen grab is from now!
  19. Notice Mac minis are basically out of stock and even an M3 Ultra 96 GB (base model) is nowhere to be seen. I got mine from Currys of all places! I check it every few days to judge demand; now they have none! I think when I sell the M3 I'll get a pretty good price for it, even with the M5 released, because it's all going to sell out fast!
  20. After lots of reading (for probably months), YouTube videos etc. etc., enough to confuse anyone, I decided to start Avalon. ChatGPT and I are the architects: we set up llama etc., VS Studio, SSH into the M3 and so on, even Parsec screen share. DeepSeek is our reasoner. Qwen Coder Next is our sweatshop "just do it!" coder. Chat managed to create me some scripts so we create a brief for today's task, then get DeepSeek to take that and produce a detailed instruction list for Qwen. So the process is:
      • Create the task
      • Give it to DeepSeek
      • Approve DeepSeek's output
      • Tell Qwen to do it (for reasons I'm unsure of we don't use git diffs but JSON; I didn't argue as it seems to work)
      • Then some kind of test
      (See the pipeline sketch at the end of this list.) Chat gets excited about this; me less so when the terminal window says "it worked!" 😊 But we are building the framework and structure first. I'm blown away by this all running locally! An LLM creating a document for an LLM to produce detail for another LLM. Crazy!! 90% of programmers obsolete! Just today's tasks would have been weeks of learning and work.
  21. I'll use Chat as my reasoner, but I need local models to do my locally actioned stuff until I can source enough RAM. Bigger models did mean better, but now smaller, better-optimised models can outperform a larger one. Obviously there's a point where you can't go smaller without too much compromise. If you ask ChatGPT itself how long before a model matches its current capabilities at a 'reasonable' size, it reckons within 2 years a 256 GB model will be comparable to ChatGPT 5.4. That's insane! To have that capability sat on your desk in a relatively low-RAM config. MoE and potentially 'bolt-on' specialists seem to be the next level: a medical specialist, maths specialist etc. plugged in to the reasoner.
  22. Even if Mac unified memory and iPhone RAM aren’t identical final packages, they come from the same constrained LPDDR supply chain. Apple then has to allocate that supply across products, and iPhone will obviously take priority over niche high-memory Mac configs.
  23. TSMC doesn't make RAM; no one said it does. TSMC is the SoC bottleneck. The RAM shortage is a separate bottleneck, and Apple clearly isn't immune. Other manufacturers have openly said memory shortages are affecting them, and Cook has said Mac mini / Mac Studio supply-demand balance will take several months. High-memory Mac mini / Studio configs are also exactly where the worst delays and unavailable options have shown up. There are also multiple reports suggesting the memory shortage could last well into 2027 and possibly beyond, especially as AI demand keeps soaking up supply. So pointing out that TSMC doesn't make RAM doesn't disprove the RAM issue; it just identifies a different bottleneck.
  24. 🙄🙄🙄🙄😉
  25. Well! Unfortunately, as I presume you now realise, the RAM shortage is a massive problem; yes, even for Apple. It did look obvious tbh, so now I have to wait for any indicator of an Ultra appearing and jump on it! I guess it could be announced at WWDC, maybe with a date for pre-orders. That buys Apple time to acquire RAM whilst still bigging up their product. That's the best I can hope for: a definite date to refresh the Apple Store!
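On the retry pipeline in post 3 (the error goes to DeepSeek, which rewrites the prompt for the coder): a minimal sketch, assuming both models are served behind a local OpenAI-compatible chat endpoint such as the one llama.cpp's server exposes. The URL, model names, and the run_tests() harness are illustrative assumptions, not the actual setup.

```python
import subprocess
import requests

BASE = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint

def ask(model: str, prompt: str) -> str:
    """Send one stateless prompt and return the reply text."""
    r = requests.post(BASE, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def run_tests(code: str) -> str | None:
    """Hypothetical harness: run the snippet, return stderr on failure, None on success."""
    proc = subprocess.run(["python", "-c", code], capture_output=True, text=True)
    return None if proc.returncode == 0 else proc.stderr

def build_with_retries(task_prompt: str, max_rounds: int = 3) -> str:
    """Coder writes code; on error the reasoner rewrites the prompt and we retry."""
    prompt = task_prompt
    for _ in range(max_rounds):
        code = ask("qwen-coder", prompt)   # coder attempt
        error = run_tests(code)
        if error is None:
            return code                    # tests passed
        # Feed the failure back to the reasoner so it can sharpen the prompt.
        prompt = ask("deepseek",
                     f"This prompt produced code that failed with:\n{error}\n"
                     f"Rewrite the prompt so the coder avoids the error:\n{prompt}")
    raise RuntimeError("coder did not converge within retry budget")
```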
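Post 9's RAM-versus-capability trade-off can be put in rough numbers. A back-of-envelope sketch: the weights arithmetic is the standard rule of thumb, while the flat overhead allowance for KV cache and runtime is my assumption, not a measured figure.

```python
# Rough RAM estimate for a local model: weights plus a flat allowance for
# KV cache and runtime overhead (the 2 GB default is an assumption, and
# shrinks further with the small context windows discussed above).
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead_gb: float = 2.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantisation squeezes into an 8 GB machine...
print(f"7B @ Q4:  ~{model_ram_gb(7, 4):.1f} GB")   # ~5.5 GB
# ...whereas a 70B model at the same quantisation needs far more unified RAM.
print(f"70B @ Q4: ~{model_ram_gb(70, 4):.1f} GB")  # ~37.0 GB
```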
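The small-context, fresh-prompt approach from post 13 looks roughly like this: a sketch assuming the llama-cpp-python bindings (the posts don't name a runtime) and a placeholder model path. Because every task is a self-contained prompt, no history has to fit in the window, so n_ctx can stay small and the KV cache stays cheap, which is where the RAM saving comes from.

```python
# Stateless, small-context calls: each task is a fresh, self-contained
# prompt, so the context window only needs to fit one brief plus its reply.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="qwen-coder.gguf",  # placeholder; any local GGUF model
    n_ctx=2048,                    # deliberately small context window
)

def fresh_task(brief: str) -> str:
    """One task in, one reply out; no conversation history is carried over."""
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": brief}],  # history never appended
    )
    return out["choices"][0]["message"]["content"]
```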
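Finally, the brief-to-test loop from post 20 as one function, reusing the ask() helper from the retry sketch above. The JSON shape for Qwen's edits is an illustrative assumption; the post only says JSON is used instead of git diffs.

```python
import json
from pathlib import Path

def daily_pipeline(brief: str) -> None:
    # 1. Reasoner turns today's brief into a detailed instruction list.
    instructions = ask("deepseek",
                       "Turn this brief into numbered coding instructions "
                       f"for a coder model:\n{brief}")
    # 2. Human approval gate on the reasoner's output.
    print(instructions)
    if input("Approve DeepSeek output? [y/N] ").lower() != "y":
        return
    # 3. Coder executes, replying with edits as JSON rather than git diffs.
    #    Assumed shape: {"files": [{"path": "...", "content": "..."}]}
    raw = ask("qwen-coder",
              "Follow these instructions and reply ONLY with JSON of the form "
              '{"files": [{"path": "...", "content": "..."}]}:\n'
              f"{instructions}")
    for f in json.loads(raw)["files"]:
        Path(f["path"]).write_text(f["content"])  # apply the edits
    # 4. "Then some kind of test": hook your test step in here.
    print("edits applied; run the test step next")
```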