This is my experience with AI. It is intended to help me and anyone else who has questions about whether AI will take over jobs.
Frankly, it isn't much of a debate for me - computing has been taking over jobs for decades while simultaneously creating tons of new ones, to the point that people from core domains, even doctors, have left their fields and switched to software because it pays well.
The AI Consultant Experience
Coding with AI has become an unavoidable part of my daily job - of course, who wouldn't want a consultant intelligence a few keystrokes away? AI subscription prices vary widely - some tools are free with basic features, while paid tiers range from $10 to $50+ monthly, depending on capabilities and usage limits. That's why I subscribe to all the popular intelligent beings - Claude, Copilot, ChatGPT - plus DeepSeek, Cursor, and Cline on OpenRouter. Sometimes it feels like a good distraction from solving the problem myself, and sometimes it merely satisfies my "fear of missing out" on some amazing solution this intelligent being might spell out (the sarcastic tone is intentional).
As I kept doing that, I slowly began to understand that although you can consult AI on everything you are working on, it's like quicksand - it produces an overwhelming amount of content in seconds, whether code (I can't type that fast) or knowledge. So it creates the feeling that one would be a fool to rely on one's own intelligence instead of using it.
The FOMO and Sunk Cost Problem
So it's FOMO that first pulls you into its world, and once you start solving with AI, the next step is continuously investing in it to get it to solve the problem. This is the sunk cost fallacy - continuing a behavior because of previously invested resources. If you don't catch yourself, many hours are gone and the problem still isn't solved.
Let's say you rely on it from the start. That's fine as long as it does things right, but when it makes a mistake (which it does more often than not, and even more so when your own knowledge of the area is shaky or outdated), it takes hours to debug. Many times, I give up.
When you start solving with AI, you must first hold your breath and accept that it might not be able to solve the problem today. And, oddly enough, you have to be wiser than the AI and know when to stop going down the black hole. (The only savior is the AI hitting its usage limit - but you won't hit a limit if you use it through APIs with tools like Cline.) Not every problem is AI's cup of tea (at least not in the way you think).
The Real Power of AI
The real power of AI is not just in these product names but in "prompt engineering" - that's why files like Cursor rules and Copilot instructions exist. The next rise is in "taming" AI (though I would keep calling it taming LLMs). Attributing every failure and success entirely to AI seems unfair - LLMs work in their own way; let them.
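To make that concrete, here is a hypothetical example of the kind of rules file I mean. Cursor reads a .cursorrules file in the project root and Copilot reads .github/copilot-instructions.md; both are plain-text instructions sent along with every request. The specific rules below are my own illustration, not something either tool requires:

```
# Hypothetical .cursorrules / .github/copilot-instructions.md content
- We use Python 3.12 with type hints; follow PEP 8.
- Prefer the standard library; ask before adding a new dependency.
- Every new function needs a pytest unit test next to it.
- Never modify anything under migrations/ or generated/.
```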
Would I continue using LLMs?
Hell yeah. It's a continuous learning process - to use AI well, you first have to understand yourself and how you think. You need to break the puzzle you want to solve into bits and pieces and make it solve those bits.
For example, if I want a class or function that performs a depth-first search on my data structure, I do need to take the time to think it through myself - think of it as "setting the direction" - because if you don't set it, the AI will set it and take you down its hole. That's fine if you have time to spare, but be careful when you don't. Once you set the direction, you take it down the path you want, not the path it would take on its own, and let it generate the things it can produce at an incredible pace, as in the sketch below.
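As a minimal sketch (the Graph class and names here are my own illustration, not anything a particular tool requires), "setting the direction" means I write the data structure, the signature, and the docstring myself, and only let the assistant fill in the traversal body:

```python
from collections import defaultdict


class Graph:
    """Adjacency-list graph; the structure I define before asking AI for help."""

    def __init__(self):
        self.adj = defaultdict(list)

    def add_edge(self, u, v):
        self.adj[u].append(v)

    def depth_first_search(self, start):
        """Return nodes in DFS pre-order starting from `start`.

        I wrote the signature, the docstring, and chose the iterative approach;
        the assistant only has to fill in the loop below.
        """
        visited, order, stack = set(), [], [start]
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            order.append(node)
            # Push neighbors in reverse so they are visited in insertion order.
            stack.extend(reversed(self.adj[node]))
        return order


g = Graph()
g.add_edge("a", "b")
g.add_edge("a", "c")
g.add_edge("b", "d")
print(g.depth_first_search("a"))  # ['a', 'b', 'd', 'c']
```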
Continuing the example - it can write the code above faster than me and with fewer bugs, but only if I am patient enough to drive it. This is traditionally called the primary-replica model (formerly master-slave); in this case, being the secondary instance carries no negative connotation. After all, you would hate it if your database had no replicas - or too few - for redundancy and performance.