Have you ever run into false positives from a static source code analysis tool? What’s the best way to identify those so devs can focus on fixing real issues?

Chief Technical Officer in Software, a year ago

The best way is for the developer to review the finding and work out whether it is a false positive, then discuss with their team whether to mark it as such or to restructure the code so it isn't flagged in the future (see the sketch below). Even false positives can point to a code weakness that should be investigated. There will always be a small percentage of false positives; that is just life.
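To make the two outcomes concrete, here is a minimal sketch in Python, assuming pylint as the analyzer; the specific finding (W1510, subprocess-run-check), the function names, and the command being run are illustrative only.

```python
# Two ways of resolving the same static-analysis finding, assuming pylint
# flags subprocess.run calls that omit check= (message W1510).
import subprocess


def scan_marked_false_positive(tool: str) -> int:
    # The team reviewed the finding and decided it is a false positive here,
    # because the return code is handled explicitly below. The finding is
    # suppressed inline with a justification so it does not reappear on
    # every scan.
    result = subprocess.run(  # pylint: disable=subprocess-run-check
        [tool, "--version"], capture_output=True, text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"{tool} failed: {result.stderr.strip()}")
    return result.returncode


def scan_restructured(tool: str) -> int:
    # The alternative: restructure the code so the tool no longer flags it.
    # check=True raises CalledProcessError on failure, which also removes
    # the code weakness the finding was hinting at.
    result = subprocess.run(
        [tool, "--version"], capture_output=True, text=True, check=True
    )
    return result.returncode
```

Either outcome is recorded deliberately by the team rather than left as noise in the next report, which is the point of the triage step.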

Senior Data Scientist in Miscellaneous, a year ago

In my humble opinion, the question itself is misleading. Classical statistical tests assume a probability of being wrong (the "probability of error"). That is why one rejects a hypothesis when that probability drops below a certain threshold (most often 5%). That probability covers either the case that the hypothesis is correct but the data sample is not representative, or that the hypothesis is wrong but the data sample suggests otherwise (like published studies claiming red wine or coffee support one's health).
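To illustrate that built-in error probability, here is a small simulation sketch in Python, assuming numpy and scipy are available; the sample size, seed, and trial count are arbitrary choices.

```python
# Simulate a one-sample t-test at the conventional 5% threshold when the
# null hypothesis is true, and count how often it is wrongly rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # conventional rejection threshold
trials = 10_000
false_positives = 0

for _ in range(trials):
    # The hypothesis (mean == 0) is true by construction, so every
    # rejection below is a false positive.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1

print(f"false positive rate: {false_positives / trials:.3f}")
```

With the hypothesis true in every trial, roughly 5% of tests still reject it, which is exactly the error rate the 5% threshold accepts up front.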
