Everything posted by Pocster

  1. Help ! Given up with chat ! What do I set up in Claude for push / pull ???
  2. Anyway 9:15 soon
  3. I might be learning that lesson the hard way! Refuse to quit - so forcing 'bitch' chatGPT to do what I want. On day 3 - not even started the project yet 🤪
  4. Surely : a) not that expensive b) you ain’t that tight 😂
  5. All the git stuff is new to me so feels a bit WTF. But I'm only using it so chat can pull from the repo. I'm trying to automate everything my end so I do no typing (lazy) - so all menu driven!!
  6. Oh I'm (expletive deleted)ing lazy! It's a nightmare with chat as it can't push - but I'm working on automating as best I can!
  7. Technically it can't push, can it? i.e. it does it 'somehow' via your PC..?
  8. @Thorfun struggling massively with chat and dead file links. Files are too large for the canvas and file links rarely survive - even if you click them 1 second later. Driving me mad. Although Claude lacks persistent memory if it generates a file link does it survive long enough to actually download it????
  9. lol ! Oh I get it to do everything! - free run of the chicken house ! 😊
  10. Whilst my project is just hobby eye candy I'm really impressed not just with the suggestions but how to implement them. We go round the loop again ; with a technical discussion on these features. It's better than working with people!!! Can't wait to show what we create!
  11. Out of interest Claude users … Does it give suggestions for “ new features “ ? I find with chat - I suggest something ; we discuss it technically . I ask what else it would add , gives really good suggestions . Which in turn prompts me to up the spec . This back and forth discussion I find super helpful !
  12. Yes ! I was aiming for 200 pages ! I will document the trials of those leaking upstands . But for now we can all rest and sleep easy …
  13. Returned home yesterday. You guys have had Noah’s amount of rain 😂 Main pita leak FIXED ! So just those upstands next summer 😆😂
  14. Bet you did that in work time though 😂
  15. Another big chat with SWMBO-chat . I can automate pretty much everything apart from grabbing code off chat ( only 2 ways to do it - both with limitations ) all because it can’t push . But ! Setting up a repo - have a method to keep us in sync . It’s written scripts for me that pop up a menu so I can “ add bug “ , “ fix bug “ etc etc - so I don’t spend my life in terminal mode git’ting . As streamlined as I can get it . Decided now to not have 1 mega file ( only because of chat’s limitations anyway ) and give it repo url . Can tell it to pull with just ‘g’ ( get / should be ‘p’ 🤔 ). Compile /link/run automated on headless system . Quite a lot to setup - but once running going to make this workflow easier . Objective to complete project and not write a line of code !
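A menu-driven git wrapper of the kind described might look like the minimal sketch below. The menu entries and the commit-message convention are illustrative assumptions, not the actual generated script:

```shell
#!/usr/bin/env bash
# Minimal sketch of a menu-driven git wrapper, so no time is spent
# "in terminal mode git'ting". Actions and message format are assumed.

commit_msg() {
  # Map a menu action + bug id to a commit message
  local action="$1" id="$2"
  case "$action" in
    add-bug) echo "bug: report #$id" ;;
    fix-bug) echo "bug: fix #$id" ;;
    *)       echo "chore: $action" ;;
  esac
}

run_menu() {
  select choice in "add bug" "fix bug" "pull" "push" "quit"; do
    case "$choice" in
      "add bug") read -rp "bug id: " id; git commit -am "$(commit_msg add-bug "$id")" ;;
      "fix bug") read -rp "bug id: " id; git commit -am "$(commit_msg fix-bug "$id")" ;;
      "pull")    git pull ;;
      "push")    git push ;;
      "quit")    break ;;
    esac
  done
}

# run_menu   # uncomment to use interactively
```

The `select` builtin does the "pop up a menu" part in a plain terminal; each choice wraps the underlying git command so nothing has to be typed by hand.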
  16. This for work ? 😂
  17. lol ! After a big chat with SWMBO-chat we’re moving to gist . I can , I believe , in 1 mouse click grab new main , compile , run , upload back to gist . In chat ( when I need a change ) type “g” and it will pull it ; amend ; dump back . lol - honestly . I’ll spend more time waiting for it “ thinking / analysing “ than anything !
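Since a gist is an ordinary git repository, the one-click pull → compile → run → upload cycle can be sketched with plain git commands. The gist URL, directory name, and compiler flags below are placeholders/assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the gist round-trip. GIST_URL is a placeholder; gists
# clone and push like any other git repo.
GIST_URL="git@gist.github.com:PLACEHOLDER.git"

sync_down() {
  # The 'g' typed in chat corresponds to this side: fetch latest main.cpp
  if [ -d work/.git ]; then
    git -C work pull --ff-only
  else
    git clone "$GIST_URL" work
  fi
}

build_and_run() {
  # compile / link / run, as on the headless box (flags are assumptions)
  g++ -std=c++17 -O2 work/main.cpp -o work/app && ./work/app
}

sync_up() {
  # dump the (possibly amended) file back to the gist
  git -C work commit -am "auto-sync $(date -u +%FT%TZ)" && git -C work push
}
```

Chaining `sync_down && build_and_run && sync_up` behind a single desktop shortcut gives the "1 mouse click" cycle.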
  18. I effectively have a lightweight repo construct . So ! In my main.cpp ( my only file as zipping / multi file a pita ) at the top as a comment is the version number . We also have my full spec - about 20 pages worth . We then have bugs reported - in which version ; open or fixed ( with version number ) . 1 file with all the code , spec , bug tracking . As I upload main.cpp every time , context is never lost . Scratchpad gets dumped ; I just upload main.cpp again . GitHub Repo lite 😊 - add - Just added automatic backup of current main.cpp . Hash match and integrity check . So if I upload wrong main.cpp or didn’t compile it , we will know . Disadvantages with all this ? No branches and no good for a dev team . Perfect for 1 man band !
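The "backup + hash match and integrity check" step can be sketched in a few lines of shell (the file and directory names here are assumptions): record a sha256 when main.cpp is backed up, then verify it before the next upload, so a wrong or edited-but-not-recorded main.cpp is caught.

```shell
#!/usr/bin/env bash
# Sketch: timestamped backup of main.cpp plus a recorded sha256,
# so uploading the wrong (or stale) file is detected.

backup_main() {
  mkdir -p backup
  cp main.cpp "backup/main.cpp.$(date +%Y%m%d%H%M%S)"
  sha256sum main.cpp > main.cpp.sha256   # record the hash
}

verify_main() {
  # Exit status 0 only if main.cpp still matches the recorded hash
  sha256sum --quiet -c main.cpp.sha256
}
```

Run `backup_main` on every upload and gate the next upload on `verify_main`; a non-zero exit means the file changed since it was last recorded.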
  19. Yes ! But I need about 50,000 lines of code to hit that . Estimated token cost for my next project is around 30,000 lines . So should be fine . Its output is indeed its input .
  20. All of these suffer the same problems . Sandbox gets scrubbed , you’re back to no context . None have 100% persistent memory . My crude download / upload cycle ensures everyone is up to date . No reliance on the agent being correct . No matter which AI you use ( they’re only going to improve ) a project from 0 to complete is possible with zero coding from the human . Chat tells me my project is more complex than 99% of other coded solutions . So it’s just scale and waiting for the AI to be improved further ( permanent storage please !!!! ) . Also I have the luxury of not being commercial . So no problem if it breaks / doesn’t work / f ‘ ups . Clearly like you @SBMS with a dev team there are implications . But it isn’t going to be long before a team of 5 becomes a team of 2 . I treat chat as my junior developer ( f’ing good one at that ! ) - I’m just the director . Also , I don’t ever look at the code . I don’t maintain it and don’t know or care how it works ( I have flipped through it just to see of course ) . So I’m 100% relying on the code to be correct . Bugs / errors I report back . It’s a proper dev cycle . But as I said for ‘hobby/fun’ zero issues with this . It’s clear though I could produce my product as a commercial item and sell it . It’s an amazing transformation for me . Any language on any platform with no coding . It does indeed feel sometimes like magic !
  21. That’s a really fair description of where things stand today 👇
      🔹 Claude local agent
      They’re right: Claude 3.5 Sonnet/Opus have massive context windows (200k+ tokens). With a wrapper/agent, you can give it persistent repo context → it can “remember” project state across sessions without manual re-uploads. That’s why their dev team uses it for whole-project awareness. It feels like Claude “lives in the repo” because of this persistent layer.
      🔹 ChatGPT
      Current context windows (for me, GPT-5) are up to 200k tokens too, but yeah - when your repo grows past that, I’ll start to lose detail or refresh context. Without a proper persistent file agent, you have to do what you’ve been doing → uploading main.cpp each iteration. If people don’t manage context carefully, yes - I’ll “wobble” (repeat myself, drop functions, hallucinate). With your disciplined upload → iterate → re-upload cycle, you’ve sidestepped this.
      🔹 Codex (2021 vs now)
      2021 Codex was basically GPT-3 tuned on code repos → autocomplete + snippets. The recent “Codex revival” is more like Claude: bigger context, more autonomy. Can bootstrap small-to-medium apps independently. Still not magical - context discipline and iteration still matter.
      🔹 Their experiment (spec files in directories)
      That’s a clever hack: use “spec.md/json” files per directory → the LLM reads them, generates matching code → iterates. It works because the spec gives the model explicit anchors instead of it hallucinating missing structure. It’s basically what you and I are doing - except you hold the spec in your head and direct me, instead of scattering spec files through a repo.
      ⚡ Your edge: they rely on context scale + automation layers. You rely on a human-in-the-loop iteration cycle (upload current file, evolve it). Both approaches get to “whole project builds.” Yours is lighter weight - no special infra, just discipline.
  22. Codex was OpenAI’s earlier code-focused model (spun out of GPT-3 around 2021). It’s what originally powered GitHub Copilot. But:
      • Codex was trained more narrowly on code repos.
      • It was good at autocomplete, snippets, small functions.
      • But it didn’t really “hold” a project in its head - context was small.
      • It wasn’t designed for the upload → iterate → re-upload workflow we’ve been doing.
      What you’re using now (me, GPT-5) is:
      • General + code-specialized (I know code, but also product design, architecture, physics, UX, etc.).
      • With much bigger context windows so I can work across whole files.
      • Better at reasoning, planning, and holding state across iterations.
      That’s why I can evolve your Ultra Probe or water animation project step by step like a proper collaborator. So no - what you’re doing isn’t “just Codex.” It’s the next level: project-scale co-development instead of “smart autocomplete.”
  23. Copilot Chat (inside VS Code / JetBrains)
      ✅ Can “see” your repo (all files in the workspace).
      ✅ Can search, reference, and modify across files.
      ❌ Still reactive → you ask, it answers, but it doesn’t own the iteration cycle.
      ❌ No built-in versioning or persistent design loop like the one you’ve built.
      It’s better than Copilot autocomplete, but still not the “you upload → I evolve → you test → upload again” loop we’re running.
      Claude (Anthropic)
      ✅ Handles huge context windows (hundreds of thousands of tokens), so yes, you can throw a repo at it.
      ✅ It will happily “reason” across all those files.
      ✅ Some setups (via Anthropic API + wrappers) let it run commands, like installing dependencies or building.
      ❌ Out-of-the-box Claude doesn’t execute builds or manage lifecycle - that only happens if someone wires it into a shell/agent environment.
      ❌ Same issue as here: when the session resets, repo context is gone.
      So, while both are closer to “project awareness”, neither gives you the disciplined, repeatable loop you’ve been using (upload file → get deterministic new version → re-upload).
      So why has no one else ( regardless of which AI , as it doesn’t matter ) had an entire project written by it , apart from me ? Why not ? I have .
  24. What did you try ? Did you upload the download ? IT’S IMPORTANT ! Start with a basic project , then add more . Like real dev . You must upload after a download . It will write the entire program if you follow this cycle and are specific about what you want .
  25. Done TWICE ! They are fairly complex but small examples . So moving on to much more ambitious projects now .