How is your org thinking about prompt engineering as a skill? Is this a must-have when building products with integrated AI capabilities?

809 views · 2 Upvotes · 6 Comments
VP of Data and Analytics · 19 days ago

The question to ponder is which individuals need prompt engineering skills. I don't believe it is required for an SME or a product owner/lead: with deep product knowledge, one can ask precise, detailed questions and arrive at satisfactory responses without many follow-up prompts. One might argue that a technical person knows better what to ask, but they may cover only the technical area where they have expertise and inadvertently fall into bias. Prompt engineering is probably most essential for people who lack in-depth knowledge of the area they are working in.

Director – Cybersecurity (IAM) · 19 days ago

Prompt engineering had its initial fame, but models are now smart enough to work with plain language. The value lies in clear thinking, not clever phrasing. Context is king. If I explain the goal and the business need, the model meets me more than halfway.
Also, GitHub is packed with sample prompts we tweak like code snippets.
Nowadays many AI tools I use, like Cline, have a plan-then-act flow. I brainstorm in everyday English, check the plan, and let the tool act: write the code or do the intended job. No special course needed. Future agents will draft their own prompts within the workflow, trimming the human part even more.
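
As a rough sketch of that plan-then-act shape (hypothetical helpers, not Cline's actual internals): the tool drafts a plain-language plan, a human approves it, and only then does the tool act.

```python
# Hypothetical plan-then-act loop; draft_plan and execute are stand-ins for
# whatever model/tool integration is actually in use.

def draft_plan(goal: str) -> list[str]:
    """Ask the model to break the goal into plain-language steps (placeholder)."""
    raise NotImplementedError("wire this to your coding agent / model API")

def execute(step: str) -> None:
    """Let the tool edit files or run commands for one approved step (placeholder)."""
    raise NotImplementedError

def plan_then_act(goal: str) -> None:
    plan = draft_plan(goal)
    print("Proposed plan:")
    for i, step in enumerate(plan, 1):
        print(f"  {i}. {step}")
    # The human reviews the plan in plain English before anything runs.
    if input("Proceed? [y/N] ").strip().lower() == "y":
        for step in plan:
            execute(step)
```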

Director of IT Governance in Finance (non-banking) · 22 days ago

Must have. I have business analysts who have 'converted' to prompt engineers, and after some time they get very good at it. The interesting part to me is how, as different use cases roll in that require different outputs from the LLM, their skillset has to adapt. As an example, we consume a lot of prompt output as visualizations. We do feature extraction to get the data for visualization, and some of the features are complex and may reveal themselves as objects or hierarchies. Someone who was a business analyst with domain expertise is now fluent in JSON, because that was the best way to prompt the LLM to output its data in a form meaningful to the visualization. I think it will still evolve, but that's where we are on the journey.
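
For illustration, a minimal sketch of prompting for visualization-ready JSON; the schema, prompt wording, and call_llm stub are hypothetical assumptions, not the team's actual setup.

```python
import json

# Placeholder for whatever model client is actually in use (hypothetical helper).
def call_llm(prompt: str) -> str:
    """Send the prompt to the LLM and return its raw text response."""
    raise NotImplementedError("wire this to your model provider's SDK")

# The prompt pins down the exact JSON shape the visualization layer expects,
# so the analyst's domain question and the chart's data contract travel together.
PROMPT_TEMPLATE = """
Extract the quarterly revenue by region from the report below.
Respond with JSON only, matching this schema exactly:
{{"regions": [{{"name": str, "quarters": [{{"label": str, "revenue": float}}]}}]}}

Report:
{document}
"""

def extract_features(document: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(document=document))
    return json.loads(raw)  # fails loudly if the model drifts from the schema
```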

Director of Engineering · a month ago

There is definitely a place for this, and I've been emphasizing it with my team, sharing aspects of good prompt engineering and working examples. The use for this isn't limited to building products. It's important for team members to have this skill so they know how to leverage Gen AI as a partner to increase their efficiency and effectiveness.

VP of Engineering in Manufacturing · a month ago

In my experience a broader view of prompt engineering is required. Specifically, consider that most LLMs interacting with people behave like the mildly disinterested expert: you know, that person at the office who is the know-it-all with poor social skills and is difficult to talk to. When you ask them how to promote a file in Git, they will tell you how to promote to main, and then relish watching you hose everyone because you didn't ask about promoting through the proper sub-branch.

LLMs can be very similar. They require careful questioning, collaborative reasoning, and often still produce output that is not very usable (I had one hallucinate a very reasonable API call to an open-source package that didn't exist... although after seeing what it generated, I wish it did. It would have been helpful).

But prompt engineering is also important in agentic frameworks, where you are essentially structuring software to do prompt re-writing and to align sub-requests to specialized LLMs against specific data sets.
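
A minimal sketch of what that structuring can look like; the specialist names, rewrite rules, and call_model helper are illustrative assumptions, not a specific framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    model: str
    rewrite: Callable[[str], str]   # turns the user request into a focused prompt

def call_model(model: str, prompt: str) -> str:
    """Placeholder for the actual model invocation."""
    raise NotImplementedError

# The "prompt engineering" now lives in code: each specialist gets a re-written,
# narrower prompt aimed at its own model and data set.
SPECIALISTS = {
    "sql": Specialist("sql-model", lambda q: f"Write a SQL query for: {q}\nReturn SQL only."),
    "docs": Specialist("docs-model", lambda q: f"Answer from internal docs: {q}\nCite the section."),
}

def route(request: str, topic: str) -> str:
    spec = SPECIALISTS[topic]               # in practice a classifier or LLM picks the topic
    focused_prompt = spec.rewrite(request)
    return call_model(spec.model, focused_prompt)
```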

You may also adjust the prompt knowing that a particular subject domain is difficult to access, or that a particular document is challenging to summarize as part of a RAG pipeline. In fact, we realized that our source documentation was substandard and created a custom chunking parser to make sure we had proper context. It was through understanding how the prompt failed to get the desired results that we made this modification.
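
As a hedged illustration of that kind of fix (not the actual parser described above), a chunker that keeps each heading attached to its body so retrieved chunks carry their own context:

```python
import re

def chunk_by_heading(text: str, max_chars: int = 1500) -> list[str]:
    """Split a document on markdown-style headings, keeping the heading with its
    body so each chunk retrieved by the RAG pipeline carries its own context."""
    sections = re.split(r"(?m)^(?=#{1,3} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        heading = section.splitlines()[0]
        body = section
        # If a section is too long, split the body but repeat the heading in each piece.
        while len(body) > max_chars:
            chunks.append(body[:max_chars])
            body = heading + "\n" + body[max_chars:]
        chunks.append(body)
    return chunks
```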

It's often better to string together small requests that build into a larger one, rather than trying to do it all in a single prompt.   Or use one LLM to help craft a prompt for another LLM.
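
A minimal sketch of the second pattern, with call_llm as a stand-in for whichever models are in play: one model drafts a tightly scoped prompt, a second model answers it.

```python
def call_llm(model: str, prompt: str) -> str:
    """Placeholder for the real model call."""
    raise NotImplementedError

def two_stage_answer(task: str) -> str:
    # Stage 1: a "prompt writer" model turns a loose task into a precise prompt.
    meta_prompt = (
        "Rewrite the following task as a precise prompt for another LLM. "
        "Spell out the required output format and any constraints.\n\nTask: " + task
    )
    crafted_prompt = call_llm("planner-model", meta_prompt)
    # Stage 2: a second model executes the crafted prompt.
    return call_llm("worker-model", crafted_prompt)
```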

I find "testing your understanding" to be an excellent technique when going back and forth with the LLM.  Ask a question, get an answer, restate the answer with some clarifying examples, and have the LLM confirm (or not ) your understanding.

The worst scenario to be in is not knowing what question to ask. I find this is often where less-skilled staff struggle: they ask the simple question, don't realize they got the simple answer, and then cut and paste. That is not a path to success. And while some training may help, the issue is usually a lack of critical thinking on the part of the human.
