Bubble Sheet Multiple Choice Scanner and Test Grader using OMR, Python, and OpenCV

Figure 14: Recognizing bubble sheet exams using computer vision.

Over the past few months I've gotten quite a number of requests landing in my inbox to build a bubble sheet/Scantron-like test reader using computer vision and image processing techniques.

And while I've been having a lot of fun doing this series on machine learning and deep learning, I'd be lying if I said this little mini-project wasn't a short, welcome break. One of my favorite parts of running the PyImageSearch blog is demonstrating how to build actual solutions to problems using computer vision.

In fact, what makes this project so special is that we are going to combine the techniques from many previous blog posts, including building a document scanner, contour sorting, and perspective transforms. Using the knowledge gained from these previous posts, we'll be able to make quick work of this bubble sheet scanner and test grader.

You see, last Friday afternoon I quickly Photoshopped an example bubble test paper, printed out a few copies, and then set to work on coding up the actual implementation.

Overall, I am quite pleased with this implementation and I think you'll absolutely be able to use this bubble sheet grader/OMR system as a starting point for your own projects.

To learn more about utilizing computer vision, image processing, and OpenCV to automatically grade bubble test sheets, keep reading.

Looking for the source code to this post?

Jump Right To The Downloads Section

Bubble sheet scanner and test grader using OMR, Python, and OpenCV

In the remainder of this blog post, I'll discuss what exactly Optical Mark Recognition (OMR) is. I'll then demonstrate how to implement a bubble sheet test scanner and grader using strictly computer vision and image processing techniques, along with the OpenCV library.

Once we have our OMR system implemented, I'll provide sample results of our test grader on a few example exams, including ones that were filled out with nefarious intent.

Finally, I'll discuss some of the shortcomings of this current bubble sheet scanner system and how we can improve it in future iterations.

What is Optical Mark Recognition (OMR)?

Optical Mark Recognition, or OMR for short, is the process of automatically analyzing human-marked documents and interpreting their results.

Arguably, the most famous, easily recognizable form of OMR is the bubble sheet multiple choice test, not unlike the ones you took in elementary school, middle school, or even high school.

If you're unfamiliar with "bubble sheet tests" or the trademark/corporate name of "Scantron tests", they are simply multiple-choice tests that you take as a student. Each question on the exam is multiple choice, and you use a #2 pencil to mark the "bubble" that corresponds to the correct answer.

The most notable bubble sheet test you experienced (at least in the United States) was taking the SATs during high school, prior to filling out college admission applications.

I believe that the SATs use software provided by Scantron to perform OMR and grade student exams, but I could easily be wrong there. I only make note of this because Scantron is used in over 98% of all US school districts.

In short, what I'm trying to say is that there is a massive market for Optical Mark Recognition and the ability to grade and interpret human-marked forms and exams.

Implementing a bubble sheet scanner and grader using OMR, Python, and OpenCV

Now that we understand the basics of OMR, let's build a computer vision system using Python and OpenCV that can read and grade bubble sheet tests.

Of course, I'll be providing lots of visual example images along the way so you can understand exactly what techniques I'm applying and why I'm using them.

Below I have included an example filled-in bubble sheet exam that I have put together for this project:

Figure 1: The example, filled in bubble sheet we are going to use when developing our test scanner software.

We'll be using this as our example image as we work through the steps of building our test grader. Later in this lesson, you'll also find additional sample exams.

I have also included a blank exam template as a .PSD (Photoshop) file so you can modify it as you see fit. You can use the "Downloads" section at the bottom of this post to download the code, example images, and template file.

The 7 steps to build a bubble sheet scanner and grader

The goal of this blog post is to build a bubble sheet scanner and test grader using Python and OpenCV.

To accomplish this, our implementation will need to satisfy the following seven steps:

  • Step #1: Detect the exam in an image.
  • Step #2: Apply a perspective transform to extract the top-down, birds-eye view of the exam.
  • Step #3: Extract the set of bubbles (i.e., the possible answer choices) from the perspective transformed exam.
  • Step #4: Sort the questions/bubbles into rows.
  • Step #5: Determine the marked (i.e., "bubbled in") answer for each row.
  • Step #6: Look up the correct answer in our answer key to determine if the user was correct in their choice.
  • Step #7: Repeat for all questions in the exam.

The next section of this tutorial will cover the actual implementation of our algorithm.

The bubble sheet scanner implementation with Python and OpenCV

To get started, open up a new file, name it test_grader.py, and let's get to work:

# import the necessary packages
from imutils.perspective import four_point_transform
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
	help="path to the input image")
args = vars(ap.parse_args())

# define the answer key which maps the question number
# to the correct answer
ANSWER_KEY = {0: 1, 1: 4, 2: 0, 3: 3, 4: 1}

On Lines 2-7 we import our required Python packages.

You should already have OpenCV and NumPy installed on your system, but you might not have the most recent version of imutils, my set of convenience functions to make performing basic image processing operations easier. To install imutils (or upgrade to the latest version), just execute the following command:

$ pip install --upgrade imutils          

Lines 10-12 parse our command line arguments. We only need a single switch here, --image, which is the path to the input bubble sheet test image that we are going to grade for correctness.

Line 17 then defines our ANSWER_KEY.

As the name of the variable suggests, the ANSWER_KEY provides integer mappings of the question numbers to the index of the correct bubble.

In this case, a key of 0 indicates the first question, while a value of 1 signifies "B" as the correct answer (since "B" is index 1 in the string "ABCDE"). As a second example, consider a key of 1 that maps to a value of 4; this would indicate that the answer to the second question is "E".

As a matter of convenience, I have written the entire answer key in plain English here (a small verification snippet follows the list):

  • Question #1: B
  • Question #2: E
  • Question #3: A
  • Question #4: D
  • Question #5: B
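
To verify this mapping yourself, a quick throwaway snippet (hypothetical, not part of test_grader.py) can translate ANSWER_KEY back into letters:

# hypothetical sanity check: translate each answer index back into
# its letter using the "ABCDE" convention described above
for q in sorted(ANSWER_KEY):
	print("Question #{}: {}".format(q + 1, "ABCDE"[ANSWER_KEY[q]]))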

Next, let's preprocess our input image:

# load the image, convert it to grayscale, blur it
# slightly, then find edges
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 75, 200)

On Line 21 we load our image from disk, followed by converting it to grayscale (Line 22) and blurring it to reduce high-frequency noise (Line 23).

We then apply the Canny edge detector on Line 24 to find the edges/outlines of the exam.

Below I have included a screenshot of our exam after applying edge detection:

Figure 2: Applying edge detection to our exam neatly reveals the outlines of the paper.

Notice how the edges of the document are clearly defined, with all four vertices of the exam present in the image.

Obtaining this silhouette of the document is extremely important in our next step, as we will use it as a marker to apply a perspective transform to the exam, obtaining a top-down, birds-eye view of the document:

# find contours in the edge map, then initialize
# the contour that corresponds to the document
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
docCnt = None

# ensure that at least one contour was found
if len(cnts) > 0:
	# sort the contours according to their size in
	# descending order
	cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

	# loop over the sorted contours
	for c in cnts:
		# approximate the contour
		peri = cv2.arcLength(c, True)
		approx = cv2.approxPolyDP(c, 0.02 * peri, True)

		# if our approximated contour has four points,
		# then we can assume we have found the paper
		if len(approx) == 4:
			docCnt = approx
			break

Now that we have the outline of our exam, we apply the cv2.findContours function to find the contours that correspond to the exam itself.

We do this by sorting our contours by their area (from largest to smallest) on Line 37 (after making sure at least one contour was found on Line 34, of course). This implies that larger contours will be placed at the front of the list, while smaller contours will appear farther back in the list.

We make the assumption that our exam will be the main focal point of the image, and thus be larger than other objects in the image. This assumption allows us to "filter" our contours, simply by investigating their area and knowing that the contour that corresponds to the exam should be near the front of the list.

However, contour area and size are not enough; we should also check the number of vertices on the contour.

To do this, we loop over each of our (sorted) contours on Line 40. For each of them, we approximate the contour, which in essence means we simplify the number of points in the contour, making it a "more basic" geometric shape. You can read more about contour approximation in this post on building a mobile document scanner.

On Line 47 we make a check to see if our approximated contour has four points, and if it does, we assume that we have found the exam.

Below I have included an example image that demonstrates the docCnt variable being drawn on the original image:

Figure 3: An example of drawing the contour associated with the exam on our original image, indicating that we have successfully found the exam.

Sure enough, this area corresponds to the outline of the exam.

Now that we have used contours to find the outline of the exam, we can apply a perspective transform to obtain a top-down, birds-eye view of the document:

# apply a four point perspective transform to both the
# original image and grayscale image to obtain a top-down
# birds eye view of the paper
paper = four_point_transform(image, docCnt.reshape(4, 2))
warped = four_point_transform(gray, docCnt.reshape(4, 2))

In this case, we'll be using my implementation of the four_point_transform function (a simplified sketch of its coordinate-ordering step follows the list below), which:

  1. Orders the (x, y)-coordinates of our contours in a specific, reproducible manner.
  2. Applies a perspective transform to the region.
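
For intuition, the coordinate-ordering step can be implemented with a simple sum/difference trick. The sketch below is a simplified illustration of that idea, not the exact imutils source:

def order_points(pts):
	# order a (4, 2) array of corners as top-left, top-right,
	# bottom-right, bottom-left (a simplified sketch of the idea)
	rect = np.zeros((4, 2), dtype="float32")

	# the top-left corner has the smallest x + y sum, while the
	# bottom-right corner has the largest
	s = pts.sum(axis=1)
	rect[0] = pts[np.argmin(s)]
	rect[2] = pts[np.argmax(s)]

	# the top-right corner has the smallest y - x difference,
	# while the bottom-left corner has the largest
	diff = np.diff(pts, axis=1)
	rect[1] = pts[np.argmin(diff)]
	rect[3] = pts[np.argmax(diff)]

	return rect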

You can learn more about the perspective transform in this post, as well as this updated one on coordinate ordering, but for the time being, simply understand that this function handles taking the "skewed" exam and transforms it, returning a top-down view of the document:

Figure 4: Obtaining a top-down, birds-eye view of both the original image (left) along with the grayscale version (right).

Alright, so now we're getting somewhere.

We found our exam in the original image.

We applied a perspective transform to obtain a 90-degree viewing angle of the document.

But how do we go about actually grading the document?

This step starts with binarization, or the process of thresholding/segmenting the foreground from the background of the image:

# apply Otsu's thresholding method to binarize the warped
# piece of paper
thresh = cv2.threshold(warped, 0, 255,
	cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

After applying Otsu's thresholding method, our exam is now a binary image:

Figure 5: Using Otsu's thresholding allows us to segment the foreground from the background of the image.

Notice how the background of the image is black, while the foreground is white.

This binarization will allow us to once again apply contour extraction techniques to find each of the bubbles in the exam:

# find contours in the thresholded image, then initialize
# the list of contours that correspond to questions
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
questionCnts = []

# loop over the contours
for c in cnts:
	# compute the bounding box of the contour, then use the
	# bounding box to derive the aspect ratio
	(x, y, w, h) = cv2.boundingRect(c)
	ar = w / float(h)

	# in order to label the contour as a question, the region
	# should be sufficiently wide, sufficiently tall, and
	# have an aspect ratio approximately equal to 1
	if w >= 20 and h >= 20 and ar >= 0.9 and ar <= 1.1:
		questionCnts.append(c)

Lines 64-67 handle finding contours on our thresh binary image, followed by initializing questionCnts, a list of contours that correspond to the questions/bubbles on the exam.

To determine which regions of the image are bubbles, we first loop over each of the individual contours (Line 70).

For each of these contours, we compute the bounding box (Line 73), which also allows us to compute the aspect ratio, or more simply, the ratio of the width to the height (Line 74).

In order for a contour area to be considered a bubble, the region should:

  1. Be sufficiently wide and tall (in this case, at least 20 pixels in both dimensions).
  2. Have an aspect ratio that is approximately equal to 1.

As long as these checks hold, we can update our questionCnts list and mark the region as a bubble.

Below I have included a screenshot that draws the output of questionCnts on our image:

Figure 6: Using contour filtering allows us to find all the question bubbles in our bubble sheet exam recognition software.

Notice how only the question regions of the exam are highlighted and nothing else.

We can now move on to the "grading" portion of our OMR system:

# sort the question contours top-to-bottom, then initialize
# the total number of correct answers
questionCnts = contours.sort_contours(questionCnts,
	method="top-to-bottom")[0]
correct = 0

# each question has 5 possible answers, so loop over the
# questions in batches of 5
for (q, i) in enumerate(np.arange(0, len(questionCnts), 5)):
	# sort the contours for the current question from
	# left to right, then initialize the index of the
	# bubbled answer
	cnts = contours.sort_contours(questionCnts[i:i + 5])[0]
	bubbled = None

First, we must sort our questionCnts from top-to-bottom. This will ensure that rows of questions that are closer to the top of the exam will appear first in the sorted list.

We also initialize a bookkeeper variable to keep track of the number of correct answers.

On Line 90 we start looping over our questions. Since each question has 5 possible answers, we'll apply NumPy array slicing and contour sorting to sort the current set of contours from left to right.

The reason this methodology works is because we have already sorted our contours from top-to-bottom. We know that the 5 bubbles for each question will appear sequentially in our list, but we do not know whether these bubbles will be sorted from left-to-right. The contour sort call on Line 94 takes care of this issue and ensures each row of contours is sorted from left-to-right.

To visualize this concept, I have included a screenshot below that depicts each row of questions in a separate color:

Figure 7: By sorting our contours from top-to-bottom, followed by left-to-right, we can extract each row of bubbles. Therefore, each row is equal to the bubbles for one question.

Given a row of bubbles, the next step is to determine which bubble is filled in.

We can accomplish this by using our thresh image and counting the number of non-zero pixels (i.e., foreground pixels) in each bubble region:

	# loop over the sorted contours
	for (j, c) in enumerate(cnts):
		# construct a mask that reveals only the current
		# "bubble" for the question
		mask = np.zeros(thresh.shape, dtype="uint8")
		cv2.drawContours(mask, [c], -1, 255, -1)

		# apply the mask to the thresholded image, then
		# count the number of non-zero pixels in the
		# bubble area
		mask = cv2.bitwise_and(thresh, thresh, mask=mask)
		total = cv2.countNonZero(mask)

		# if the current total has a larger number of
		# non-zero pixels, then we are examining the currently
		# bubbled-in answer
		if bubbled is None or total > bubbled[0]:
			bubbled = (total, j)

Line 98 handles looping over each of the sorted bubbles in the row.

We then construct a mask for the current bubble on Line 101 and count the number of non-zero pixels in the masked region (Lines 107 and 108). The more non-zero pixels we count, the more foreground pixels there are, and therefore the bubble with the maximum non-zero count is the index of the bubble that the test taker has bubbled in (Lines 113 and 114).

Below I have included an example of creating and applying a mask to each bubble associated with a question:

Figure 8: An example of constructing a mask for each bubble in a row.

Clearly, the bubble associated with "B" has the most thresholded pixels, and is therefore the bubble that the user has marked on their exam.

This next code block handles looking up the correct answer in the ANSWER_KEY, updating any relevant bookkeeper variables, and finally drawing the marked bubble on our image:

	# initialize the contour color and the index of the
	# *correct* answer
	color = (0, 0, 255)
	k = ANSWER_KEY[q]

	# check to see if the bubbled answer is correct
	if k == bubbled[1]:
		color = (0, 255, 0)
		correct += 1

	# draw the outline of the correct answer on the test
	cv2.drawContours(paper, [cnts[k]], -1, color, 3)

Whether the test taker was correct or incorrect determines which color is drawn on the exam. If the test taker is correct, we'll highlight their answer in green. However, if the test taker made a mistake and marked an incorrect answer, we'll let them know by highlighting the correct answer in red:

Figure 9: Drawing a "green" circle to mark "correct" or a "red" circle to mark "incorrect".

Finally, our last code block handles scoring the exam and displaying the results on our screen:

# grab the test taker's score
score = (correct / 5.0) * 100
print("[INFO] score: {:.2f}%".format(score))
cv2.putText(paper, "{:.2f}%".format(score), (10, 30),
	cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
cv2.imshow("Original", image)
cv2.imshow("Exam", paper)
cv2.waitKey(0)

Below you can see the output of our fully graded example image:

Figure 10: Finishing our OMR system for grading human-taken exams.

In this case, the reader obtained an 80% on the exam. The only question they missed was #4, where they incorrectly marked "C" as the correct answer ("D" was the correct choice).

Why not use circle detection?

After going through this tutorial, you might be wondering:

"Hey Adrian, an answer bubble is a circle. So why did you extract contours instead of applying Hough circles to find the circles in the image?"

Great question.

To start, tuning the parameters to Hough circles on an image-to-image basis can be a real pain. But that's only a minor reason.

The real reason is:

User error.

How many times, whether purposely or not, have you filled in outside the lines on your bubble sheet? I'm no expert, but I'd have to guess that at least one in every 20 marks a test taker fills in is "slightly" outside the lines.

And guess what?

Hough circles don't handle deformations in their outlines very well; your circle detection would totally fail in that case.

Because of this, I instead recommend using contours and contour properties to help you filter the bubbles and answers. The cv2.findContours function doesn't care if the bubble is "round", "perfectly round", or "oh my god, what the hell is that?".

Instead, the cv2.findContours function will return a set of blobs to you, which will be the foreground regions in your image. You can then take these regions, process and filter them to detect your questions (as we did in this tutorial), and go about your way.
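
For comparison, here is roughly what the Hough-circles route would look like. Every parameter value below is an illustrative guess rather than a recommendation, and in practice each would need re-tuning for nearly every new scan:

# a sketch of the Hough-circles alternative -- dp, minDist, param1,
# param2, and the radius bounds are all guesses that would need
# per-image tuning
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
	minDist=20, param1=75, param2=30, minRadius=10, maxRadius=30)

# HoughCircles returns None when nothing passes the accumulator
# threshold, so a deformed or partially-filled mark can simply vanish
if circles is not None:
	circles = np.round(circles[0]).astype("int")
	for (x, y, r) in circles:
		cv2.circle(image, (x, y), r, (0, 255, 0), 2)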

Our bubble sheet test scanner and grader results

To see our bubble sheet test grader in action, be sure to download the source code and example images for this post using the "Downloads" section at the bottom of the tutorial.

We've already seen test_01.png as our example earlier in this post, so let's try test_02.png:

$ python test_grader.py --image images/test_02.png          

Here we can see that a particularly nefarious user took our exam. They were not happy with the test, writing "#yourtestsux" across the front of it along with an anarchy-inspiring "#breakthesystem". They also marked "A" for all answers.

Perhaps it comes as no surprise that the user scored a pitiful 20% on the exam, based entirely on luck:

Figure 11: By using contour filtering, we are able to ignore the regions of the exam that would have otherwise compromised its integrity.

Let's try another image:

$ python test_grader.py --image images/test_03.png

This time the reader did a little better, scoring a 60%:

Figure 12: Building a bubble sheet scanner and test grader using Python and OpenCV.

In this particular example, the reader simply marked all answers along a diagonal:

$ python test_grader.py --image images/test_04.png          
Figure 13: Optical Mark Recognition for test scoring using Python and OpenCV.

Unfortunately for the test taker, this strategy didn't pay off very well.

Let's look at one final example:

$ python test_grader.py --image images/test_05.png
Figure 14: Recognizing bubble sheet exams using computer vision.

This student clearly studied ahead of time, earning a perfect 100% on the exam.

Extending the OMR and test scanner

Admittedly, this past summer/early fall has been one of the busiest periods of my life, so I needed to timebox the development of the OMR and test scanner software into a single, shortened afternoon last Friday.

While I was able to get the bare bones of a working bubble sheet test scanner implemented, there are certainly a few areas that need improvement. The most obvious area for improvement is the logic to handle non-filled-in bubbles.

In the current implementation, we (naively) assume that a reader has filled in one and only one bubble per question row.

However, since we determine if a particular bubble is "filled in" simply by counting the number of thresholded pixels in a row and then sorting in descending order, this can lead to two problems:

  1. What happens if a user does not bubble in an answer for a particular question?
  2. What if the user is nefarious and marks multiple bubbles as "correct" in the same row?

Luckily, detecting and handling these issues isn't terribly challenging; we simply need to insert a bit of logic.

For issue #1, if a reader chooses not to bubble in an answer for a particular row, then we can place a minimum threshold on Line 108 where we compute cv2.countNonZero:

Figure 15: Detecting if a user has marked zero bubbles on the exam.

If this value is sufficiently large, then we can mark the bubble as "filled in". Conversely, if total is too small, then we can skip that particular bubble. If, at the end of the row, there are no bubbles with sufficiently large threshold counts, we can mark the question as "skipped" by the test taker.
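
To make that concrete, below is a sketch of how the per-row grading loop could be restructured. This is not the downloadable code for this post, and min_filled=200 is a made-up threshold you would tune to your scan resolution:

def grade_row(cnts, thresh, min_filled=200):
	# return the index of the filled-in bubble for one row of sorted
	# bubble contours, or None if the row was left blank
	bubbled = None

	for (j, c) in enumerate(cnts):
		# mask out everything except the current bubble, then count
		# its foreground pixels
		mask = np.zeros(thresh.shape, dtype="uint8")
		cv2.drawContours(mask, [c], -1, 255, -1)
		total = cv2.countNonZero(cv2.bitwise_and(thresh, thresh,
			mask=mask))

		# only consider bubbles with enough foreground pixels
		if total >= min_filled and (bubbled is None
				or total > bubbled[0]):
			bubbled = (total, j)

	return None if bubbled is None else bubbled[1]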

A similar set of steps can be applied to issue #2, where a user marks multiple bubbles as correct for a single question:

Figure 16: Detecting if a user has marked multiple bubbles for a given question.

Again, all we need to do is apply our thresholding and counting step, this time keeping track of whether there are multiple bubbles whose totals exceed some pre-defined value. If so, we can invalidate the question and mark it as incorrect.
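
In sketch form, the same counting loop could instead collect every bubble that clears the threshold, so that a row is only graded when exactly one bubble qualifies (again using the hypothetical min_filled value from above):

def find_filled(cnts, thresh, min_filled=200):
	# return the indexes of *all* bubbles in the row whose foreground
	# pixel count clears the threshold
	filled = []

	for (j, c) in enumerate(cnts):
		mask = np.zeros(thresh.shape, dtype="uint8")
		cv2.drawContours(mask, [c], -1, 255, -1)
		total = cv2.countNonZero(cv2.bitwise_and(thresh, thresh,
			mask=mask))

		if total >= min_filled:
			filled.append(j)

	return filled

# an empty list means the question was skipped; more than one entry
# means multiple marks, and the question can be invalidated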

What's next? I recommend PyImageSearch University.

Class information:
35+ total classes • 39h 44m video • Last updated: February 2022
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or that it has to involve complex mathematics and equations? Or requires a degree in computer science?

That's not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • ✓ 35+ courses on essential computer vision, deep learning, and OpenCV topics
  • ✓ 35+ Certificates of Completion
  • ✓ 39h 44m on-demand video
  • ✓ Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
  • ✓ Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser, on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 500+ tutorials on PyImageSearch
  • ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
  • ✓ Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In this blog post, I demonstrated how to build a bubble sheet scanner and test grader using computer vision and image processing techniques.

Specifically, we implemented Optical Mark Recognition (OMR) methods that facilitated our ability to capture human-marked documents and automatically analyze the results.

Finally, I provided a Python and OpenCV implementation that you can use for building your own bubble sheet test grading systems.

If you have any questions, please feel free to leave a comment in the comments section!

But before you go, be sure to enter your email address in the form below to be notified when future tutorials are published on the PyImageSearch blog!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


Source: https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/
