Have you ever run into false positives from a static source code analysis tool? What’s the best way to identify those so devs can focus on fixing real issues?

Chief Technical Officer in Software, a year ago

The best way is for the dev to review the finding and work out whether it is a false positive, then discuss with the team whether to mark it as such or to restructure the code so it isn't flagged again. Even a false positive can point to a genuine code weakness that should be investigated. There will always be a small percentage of false positives; that is just life.
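Once a finding has been triaged as a false positive, most analysis tools let you suppress it inline with a recorded justification so it stops reappearing in every scan. Here is a minimal sketch assuming a Python codebase scanned with Bandit; the rule IDs, function, and justification text are illustrative, not taken from the thread:

```python
# Hypothetical example: a Bandit finding reviewed by the team and judged a
# false positive, then suppressed inline with a justification so it does not
# keep reappearing in future scans.

import subprocess  # nosec B404 - subprocess use is deliberate and reviewed


def list_files(directory: str) -> str:
    # Bandit would normally flag this subprocess call; the team concluded it
    # is safe because the command and arguments are hard-coded and no
    # untrusted input reaches the call, so the specific rules are suppressed.
    result = subprocess.run(  # nosec B603, B607 - fixed command, shell=False
        ["ls", "-l", directory],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Keeping the justification next to the suppression is what makes this workable: the next reviewer can see why the finding was dismissed instead of re-triaging it from scratch.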

Senior Data Scientist in Services (non-Government), a year ago

In my humble opinion, the question itself is misleading. Classical statistical tests assume a probability of being wrong (a "probability of error"). That is why one rejects a hypothesis when that probability drops below a certain threshold (most often 5%). That probability covers two possibilities: either the hypothesis is correct but the data sample is not representative, or the hypothesis is wrong but the data sample suggests otherwise (like published studies claiming red wine or coffee supports one's health).
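To make that analogy concrete for a scan: if each individual check were allowed a 5% error probability, running many such checks almost guarantees some spurious flags. A small, self-contained sketch of the arithmetic, with purely illustrative numbers:

```python
# Illustrative arithmetic only: assumes independent checks, each with a fixed
# false-positive probability alpha (the classical 5% significance threshold).

alpha = 0.05      # probability that a single check wrongly flags clean code
num_checks = 200  # hypothetical number of checks applied in one scan

# Expected number of spurious flags, and the chance of seeing at least one.
expected_false_positives = alpha * num_checks
p_at_least_one = 1 - (1 - alpha) ** num_checks

print(f"Expected spurious flags: {expected_false_positives:.1f}")
print(f"P(at least one spurious flag): {p_at_least_one:.4f}")
```

Under those assumptions you would expect about 10 spurious flags per scan, which is why a zero-false-positive tool is not a realistic expectation.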
