Our previous post was perhaps a bit too earnest, in retrospect. At Digit, we have been avid students and heavy users of a wide range of AI tools for months, but until now we had never tried AI as a “substitute” for one of our core tasks: writing advanced HTML and CSS.
Having heard so many amazing stories from our developer colleagues, we had begun to believe that “the robot” could code pretty much any application. Until we tried it ourselves.
We were about to start working on a single-page microsite and decided to give “Chatty” (GPT-4) a shot at greatness. And sure enough, it is quite magical to see it type full CSS rules and HTML attributes — the transformative moment we had experienced before while auto-generating gorgeous illustrations with Midjourney or pretty great paragraphs with, say, Bard.
But just like Midjourney has a mind of its own (a stubborn and opinionated one at that) and Bard’s copy invariably needs editing to sound real, Chatty’s front-end code gets you far, but not quite to production-ready. And so you are left in a no-man’s land because, mind you, just to get its code for the microsite to be even mildly usable, we spent a full day trying different prompts, chunking the task, correcting, and reshaping.
By then we had invested so much time and effort in the Chatty experiment that doing it ourselves from scratch was an unappealing prospect, to say the least; yet using Chatty’s code as delivered was out of the question. We could fix it, but would we actually be any faster than if we had architected the code ourselves from the beginning? Quite the conundrum.
We want to keep this post light, not one of those long step-by-step tutorials. But we do at least want to share the lessons we learned in the process (plus a few tiny, illustrative snippets after the list):
- We have a ChatGPT Plus account (I think) but, regardless, the robot could never fit a full HTML file and a full CSS file into its response to a single prompt. At times the UI would let us get it to finish the text; other times it would just restart writing the entire set from scratch (though it usually finished on those second tries). So we resorted to using just a couple of words as markers for the type of text within each component (see the first sketch after this list), with the goal of pasting in the full copy later on in our IDE.
- I don’t think there’s a way to avoid building a long page by blocks, which invariably introduces redundant and inefficient CSS (the second snippet below shows the kind of duplication we mean). That said, if you can reduce your directions to the simplest possible set of rules, you might be able to use one prompt for the entire page.
- If you have a precise idea of the layout and content of your web page(s), be extremely precise in how you structure and explain your prompt…almost like pseudocode. Well, not quite THAT detailed, but close. Go from the general (the site needs to be responsive at the following breakpoints, use HTML and CSS, follow coding best practices and conventions, etc.) to the specific (for this component, use this layout). Also, use technical terms to get results as close to your vision as possible: flexbox or grid? Padding or margin? (The last snippet below gives a sense of the difference this makes.)
- If, instead, you are open to seeing what kinds of layouts Chatty comes up with, skip the technical terms and describe what you want in specific, plain English.
- Be prepared to get frustrated as Chatty follows some of your directives and, seemingly randomly, chooses to ignore portions of your prompt.
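A few tiny, hypothetical snippets to make those lessons concrete (none of this is Chatty’s actual output, and the class and marker names are made up). First, the marker trick: we asked for skeletons roughly like this, then pasted the real copy in later.

```html
<!-- Hero block: marker text only; the real copy gets pasted in later, in the IDE -->
<section class="hero">
  <h1 class="hero__title">[hero-heading]</h1>
  <p class="hero__subtitle">[hero-subcopy]</p>
  <a class="hero__cta" href="#contact">[cta-label]</a>
</section>
```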
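Second, the redundancy you get from building the page block by block: each block tends to arrive with its own copy of the same layout rules, which you end up merging by hand. A simplified illustration:

```css
/* What block-by-block generation tends to produce */
.hero     { display: flex; flex-direction: column; gap: 1rem; padding: 2rem; }
.features { display: flex; flex-direction: column; gap: 1rem; padding: 2rem; }
.footer   { display: flex; flex-direction: column; gap: 1rem; padding: 2rem; }

/* What you would have written yourself */
.hero,
.features,
.footer {
  display: flex;
  flex-direction: column;
  gap: 1rem;
  padding: 2rem;
}
```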
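And third, the payoff of technical terms: asking for “a two-column features section built with CSS grid that collapses to one column below 768px” tends to get you something close to the sketch below, while “put the features side by side” leaves the layout to Chatty’s imagination.

```css
/* Two-column features grid, collapsing to one column on narrow screens */
.features {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 2rem;
}

@media (max-width: 767px) {
  .features {
    grid-template-columns: 1fr;
  }
}
```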
We may have been doing this all wrong. It was our first try, after all. If that’s the case, please let us know where we might have gone off the rails.
Ultimately, the entire future of AI, and each developer’s individual bet on it, depends on how we answer one question for ourselves: do we believe LLMs will get qualitatively better again, the way they did at the end of last year after being stuck for decades? Because if they don’t, I’m far from certain that AI will be handed more than a modest set of tasks in professional front-end development, at least.
Perhaps we’ll try again with some back-end tasks to see if it does any better.