Good read.
Please add the official website for DeepAgent to help readers:
https://deepagent.abacus.ai/
The context section is helpful.
A few questions:
1. What does implementation completeness mean? It would be awesome if you could define that for Scenario 3.
2. What do you mean by extracting DeepAgent's patterns? I have a faint idea, but it would be good if you could define it.
3. It would be great if you could add your own nuanced engineering perspective on the final results produced.
"""
I spent hours with Claude Code, Cursor, and my own engineering eyes evaluating which structure is best and which implementation is the most robust.
"""
4. [noob question] One of the biggest takeaways for me is: "DeepAgent creates a compliant app." Are we going by their claims, or is there a standard way to check compliance?
Are you suggesting that implementing something like audit logging makes it compliant? Would you consider that a pattern that can be extracted out?
Fantastic questions, and glad you enjoyed the article.
1. In Scenario 3, DeepAgent produced a fully working app with an integrated database and an interface to interact with it, as well as a functioning backend and frontend. Claude (in 1h) generated an architecturally very strong backend with Prometheus monitoring and Docker-ready deployment, but the frontend was lacking, and of course there was no fully integrated DB, unlike DeepAgent's.
2. I meant reusing the things that worked well for them, like multi-agent usage, the iterative inner-agent workflow, the no-going-back function, and turnkey deployment with a focus on database compatibility (there is a rough sketch of the iterative loop I mean at the end of this reply).
3. As I mentioned above, it's pretty impressive what all of the tools achieved in the purposefully time-limited 1h interval I gave to each. I left all the results in the attached repo, but ultimately my engineering take is in how I evaluated the scalability of the apps built.
I made a point of looking at the architecture, data management, cache management, authentication implementation (where relevant), readability, documentation, UX, and GUI produced by each of the workflows with my engineer's eyes, the way I would look at someone's project handed off to me to improve further.
4. Here I admit I did not do any testing and just took their word for it, as I doubted they would lie and, more importantly, I have no expertise in this area to really validate anything beyond simple research of the very limited secondary sources. (There is a sketch of what I would call an audit-logging pattern at the end of this reply, though logging on its own is not the same as compliance.)
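On point 2, to make the iterative inner-agent workflow a bit more concrete, here is a minimal sketch of the loop I have in mind: one agent drafts, another critiques, and the draft is revised until the reviewer has nothing left to flag. The generate, critique, and revise functions are hypothetical stand-ins for separate agent/LLM calls, not anything DeepAgent actually exposes.

# Hypothetical stand-ins for separate agent/LLM calls.
def generate(task: str) -> str:
    return f"initial draft for: {task}"

def critique(draft: str) -> str:
    # Returns "" once the reviewer finds nothing left to fix.
    return "" if "revised" in draft else "add error handling"

def revise(draft: str, feedback: str) -> str:
    return draft + f" [revised to address: {feedback}]"

def inner_agent_loop(task: str, max_rounds: int = 3) -> str:
    # Draft -> review -> revise, stopping when the critique comes back empty.
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:  # reviewer is satisfied
            break
        draft = revise(draft, feedback)
    return draft

print(inner_agent_loop("scaffold a CRUD backend"))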
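On point 4, by an audit-logging pattern I would mean something along these lines: every sensitive action gets recorded as a structured entry saying who did what and when. This is purely my own illustration, not code DeepAgent generated, and having it in place does not by itself make an app compliant with any particular standard.

import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_logger = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO)

def audited(action: str):
    # Decorator that records who performed which action, and when.
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_id: str, *args, **kwargs):
            result = fn(user_id, *args, **kwargs)
            audit_logger.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user_id,
                "action": action,
            }))
            return result
        return wrapper
    return decorator

@audited("delete_record")
def delete_record(user_id: str, record_id: int) -> None:
    pass  # the actual deletion would go here

delete_record("alice", 42)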