How to State the Confidence of Accuracy/Inaccuracy?

Consider that I have a dataset automatically acquired by a machine that returns the following measurements:

[111, 121, 114, 154, 149, 150]

I then go and manually check how these values received by the machine compare to the true values, and I get the following measurements when checking manually:

[112, 121, 114, 154, 149, 149]

As you can see, the datasets differ in two places (I measured 112 where the machine saw 111, and I measured 149 where the machine saw 150), meaning the machine is not perfectly accurate.

With that, what's the proper way to state and calculate how confident I am in the machine's accuracy or inaccuracy? I could obviously say it was wrong 2 out of 6 times (about 33% inaccurate / 67% accurate), but I wasn't sure whether there's a better way to represent this, especially with a dataset larger than the one in this example.
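One common way to quantify the uncertainty in such a rate is a binomial confidence interval on the agreement proportion. A minimal sketch in Python, assuming each reading is treated as a simple match/mismatch against the manual value (the Wilson score interval is used here; `z = 1.96` corresponds to a 95% level):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

machine = [111, 121, 114, 154, 149, 150]
manual  = [112, 121, 114, 154, 149, 149]

# Count exact agreements between the machine and the manual check.
matches = sum(m == t for m, t in zip(machine, manual))
accuracy = matches / len(machine)
low, high = wilson_interval(matches, len(machine))
print(f"accuracy = {accuracy:.1%}, 95% CI ~ [{low:.1%}, {high:.1%}]")
```

With only 6 samples the interval is very wide, which captures exactly the intuition in the question: a larger dataset narrows the interval and lets you state the accuracy with more confidence.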

As a follow-up question, how could I compare these confidence or accuracy levels? For example, if this machine is usually 94% accurate but was recently improved to 98% accurate, how would I state that other than saying it is 4 percentage points more accurate?

Tags: confidence, accuracy

Category: Data Science


EDIT: I'll leave this up for true classification tasks, but I agree with the comment that the OP does not describe a classification task.

You could phrase it in terms of error rate.

You start out with a success rate of $94\%$, so an error rate of $6\%$.

Now you have a success rate of $98\%$, so an error rate of $2\%$.

You reduced the error rate by $1-\dfrac{2}{6}=\dfrac{2}{3}\approx 67\%$; in other words, the new error rate is only one third of the old one.
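The same arithmetic in code, using the hypothetical 94% and 98% figures from the question, shows both ways of stating the improvement side by side:

```python
old_acc, new_acc = 0.94, 0.98            # accuracies from the question
old_err = 1 - old_acc                    # error rate: 6%
new_err = 1 - new_acc                    # error rate: 2%

absolute_improvement = new_acc - old_acc          # 4 percentage points
relative_error_reduction = 1 - new_err / old_err  # 1 - 2/6 = 2/3

print(f"{absolute_improvement:.0%} points absolute improvement, "
      f"{relative_error_reduction:.0%} relative error-rate reduction")
```

Quoting the relative error-rate reduction ("we cut errors by two thirds") is often more informative than the absolute accuracy gain, especially when accuracies are already high.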
