I'm interested in understanding how organisations manage the increasing support overhead of running the Databricks analytics platform. This is being driven by a number of factors:
- a wide range of user personas (data engineers, data scientists, analysts)
- differing maturity levels (some people need a high level of support because they have low familiarity with data, analytics, and Databricks; others have an advanced understanding)
- increasing capability (Databricks deploys new features frequently)

What tactics have people found successful for dealing with this?

131 views · 2 Upvotes · 2 Comments
Director of Data and Analytics · a day ago

To effectively manage Databricks support and enablement at scale, organizations can establish a cross-functional agile team with clearly defined roles such as product owner, platform engineers, enablement specialists, and support engineers. A structured rotational plan ensures continuous cross-training, knowledge sharing, and skill development, reducing dependency on individuals. Continuous collaboration with Databricks is embedded through roadmap reviews, joint workshops, and use of solution accelerators, ensuring rapid adoption of new features and alignment with business priorities. This model provides clear accountability within the team, continuous cross-training to handle complexity, scalable support through automation and enablement, and a strong vendor partnership to accelerate value.

Sr. Manager in Banking · 4 days ago

Managing Databricks in a large financial organization is a complex task because it supports many types of users with different skills and needs: data engineers building pipelines, data scientists creating AI models, and analysts looking for insights. Some users are very familiar with data and analytics, while others need more help. And Databricks frequently adds new features, especially in AI and data science, so teams must continuously learn to keep up.

The best way to handle this complexity is to build a strong, dedicated expert team focused on Databricks, cloud data engineering, and AI capabilities. This team can optimize data pipelines, manage AI workloads, and help less-experienced users get the most out of the platform.

Matching tools and training to user skill levels is crucial. Analysts should get easy-to-use access to trusted, curated data, while data scientists can leverage advanced AI and machine learning tools within Databricks to build predictive models and automate decision-making. Automating routine tasks like setting up environments and managing access reduces manual support effort and speeds innovation.
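One way to automate that kind of persona-based environment setup is to keep per-persona compute defaults in code and render them into a cluster request on demand. The sketch below is a minimal illustration: the persona names, node types, and limits are assumptions, not Databricks-mandated values, and the resulting dict only approximates the general shape of a Clusters API request body.

```python
import json

# Hypothetical per-persona defaults for automated environment setup.
# Node types and limits here are illustrative assumptions.
PERSONA_DEFAULTS = {
    "analyst": {"node_type_id": "Standard_DS3_v2", "num_workers": 1,
                "autotermination_minutes": 30},
    "data_engineer": {"node_type_id": "Standard_DS4_v2", "num_workers": 4,
                      "autotermination_minutes": 60},
    "data_scientist": {"node_type_id": "Standard_NC6s_v3", "num_workers": 2,
                       "autotermination_minutes": 60},
}

def render_cluster_spec(persona: str, cluster_name: str) -> dict:
    """Build a cluster-creation payload from a persona's defaults."""
    if persona not in PERSONA_DEFAULTS:
        raise ValueError(f"unknown persona: {persona}")
    spec = {"cluster_name": cluster_name,
            "spark_version": "15.4.x-scala2.12"}  # assumed runtime label
    spec.update(PERSONA_DEFAULTS[persona])
    return spec

spec = render_cluster_spec("analyst", "analyst-sandbox")
print(json.dumps(spec, indent=2))
```

Keeping these defaults in version control means a new analyst gets a right-sized, auto-terminating sandbox without a support ticket, and changing a persona's defaults rolls out everywhere at once.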

Controlling costs remains a top priority. Setting policies on compute usage (especially for resource-intensive AI jobs), monitoring spending, and communicating budgets clearly to teams helps avoid surprises and keeps operations running smoothly. Integrating Databricks with familiar BI tools such as Tableau or Power BI also encourages collaboration across teams without adding complexity.
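Those compute-usage guardrails are typically expressed as Databricks cluster policies, which are JSON documents mapping attribute paths to constraints. The sketch below follows that general attribute-path/constraint structure, but the specific limits, node types, and tag name are illustrative assumptions for a cost-control policy, not recommended values.

```python
import json

# Sketch of a Databricks cluster policy definition for cost control.
# The attribute-path / constraint structure follows the cluster policy
# format; the concrete limits below are illustrative assumptions.
cost_policy = {
    # Force idle clusters to shut down, capping runaway spend.
    "autotermination_minutes": {"type": "fixed", "value": 30, "hidden": True},
    # Cap autoscaling so a single job cannot grab unlimited workers.
    "autoscale.max_workers": {"type": "range", "maxValue": 8},
    # Restrict instance choice to a vetted, budget-friendly allowlist.
    "node_type_id": {"type": "allowlist",
                     "values": ["Standard_DS3_v2", "Standard_DS4_v2"]},
    # Require a team tag so spend can be attributed to a cost centre.
    "custom_tags.team": {"type": "unlimited", "isOptional": False},
}

# Policies are submitted to the workspace as a JSON string.
policy_definition = json.dumps(cost_policy, indent=2)
print(policy_definition)
```

Because users can only create clusters that satisfy an attached policy, budget conversations shift from chasing individual teams to maintaining one reviewable document.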

Regular training programs and a culture of sharing best practices help everyone stay current with platform updates and the latest AI and data science techniques. The goal is to deliver a powerful, user-friendly system that drives innovation and business value while maintaining strong governance and compliance with financial regulations.

Managing Databricks well means balancing technology, data science expertise, people skills, and clear processes to maximize value and reduce overhead, unlocking the true potential of AI in financial services.
