
PYTHON / OpenCV, Recreate Uncanny Manga - Anime Style

Can you tell what it is? Computer Vision.

Yesterday, I spent almost the whole day exploring the opencv module in Python. What I discovered was revealing.

Even at a very basic level, I could produce some interesting image and video manipulations using code collected from the documentation and many, many blog tutorials.

If you are a total noob like me: I am still getting used to the fact that the CV in OpenCV means Computer Vision!

Actually, I recall that I did try to get into OpenCV a few years back, when I knew no Python and when the Python opencv module was probably still early. It was all C++ code and it was a little too hard for me. I read a couple of books about opencv at the library and did not understand a single thing. That was back then.

Today, for some strange reason, with a bit of knowledge of Python, I can go a little further.

EDGE DETECT IN OPENCV

Me holding you know what.

What led me this far is my curiosity about how we can replicate the Wolfram Language's ability to edge detect from a webcam so easily...

I am surprised, actually beyond surprised, to find so much material that takes me to that destination. And even more!

The idea is of course to set up a webcam and give it this "edge detect" filter.

In a previous blog post, I mentioned this topic. Canny is the keyword you want: Canny does edge detection for you. The programmers who make opencv have built LOTS of really useful and rather easy to use Python commands to filter images and video. If you are new to this, you might be in for a nice treat. It was not as difficult as I thought, at least at the exploration level.
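As a quick taste, here is a minimal sketch of live Canny edge detection from the default webcam. The threshold values (100, 200) and the camera index 0 are just assumptions to start from, not tuned values:

import cv2

# Grab frames from the default webcam and show the Canny edges live
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()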

I am using my old workstation desktop, which happens to have a good GPU and CPU from 2009 (it was used on Happy Feet Two), but it is still pretty old now that it is 2016!

Edge detection is very interesting to me personally, because I am a big fan of stylistic artwork: manga (black and white Japanese comics) and anime (Japanese animation).

Of course, fully achieving that "manga" look is not possible. Beautiful manga painstakingly drawn by mangaka (Japanese comic artists) is an artform in itself. Whether you are a manga fan or not, you probably want to YouTube search this show: "Naoki Urasawa no Manben", an NHK manga documentary. The inking and storytelling abilities of the manga masters are unmatchable by machine. Still, the truth is that we (humans) + machines can go pretty far.

There is actually an app called Manga Camera that does a good job turning photos into low resolution manga drawings. There is also the Paper Camera app that does a similar thing. The idea is not new, but it is worth exploring from scratch.

The "Sin City" movie and comics also have that interesting graphic novel look.

OK, enough of my manga-loving babble. I wanted to recreate that manga look using Computer Vision. Is this going to be hard? To find the answer, I went through and collected as much material as possible.

After some quick searching and testing, apparently we can do it live using a webcam, iPhone cam, Android cam, whatever. I used a Ricoh Theta S the other day and it worked quite nicely.

I achieved this look just by multiplying an Edge Detect Outline with a Threshold filter. I did not really write the hard parts myself; I am just using the "LEGO blocks" from opencv.
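Roughly, the kind of combination I mean looks like the sketch below, on a single hypothetical image rather than a video stream; the file names and parameter values are placeholders:

import cv2

img = cv2.imread("photo.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge Detect Outline: Canny edges, inverted so the lines are black on white
edges_inv = cv2.bitwise_not(cv2.Canny(gray, 100, 200))

# Threshold filter: hard black/white split of the greyscale image
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# "Multiply" the two layers: a pixel stays white only if both layers are white
manga = cv2.bitwise_and(edges_inv, thresh)

cv2.imwrite("manga_look.jpg", manga)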

It still needs tweaking to get that 'alright' look. It is not perfect and still very basic, but it is a start:




I still need to figure out how to recreate crosshatching or apply pattern overlays to video.

The lighting apparently has a very strong effect on the final look.

LOTS OF OPENCV LINKS

A good starter exercise using Jupyter:
http://cbarker.net/opencv/

Some links that help me to achieve manga/anime look for video:
http://www.askaswiss.com/2016/01/how-to-create-cartoon-effect-opencv-python.html
http://blenderartists.org/forum/showthread.php?335156-Computer-Vision-in-Blender-(OpenCV)-TUTORIAL
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Global_Thresholding_Adaptive_Thresholding_Otsus_Binarization_Segmentations.php
https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html#simple-thresholding

When the opencv output video is not saving correctly, check the webcam resolution or...
http://answers.opencv.org/question/53497/canny-edge-detection-applying-to-video-but-converted-video-is-not-saving/


http://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Hough%20Circle_Transform.php
http://www.learnopencv.com/opencv-threshold-python-cpp/
http://stackoverflow.com/questions/11522755/opencv-via-python-on-linux-set-frame-width-height
http://stackoverflow.com/questions/28448276/why-are-my-grayscale-images-showing-up-in-black-and-white
http://www.pyimagesearch.com/2014/06/02/opencv-load-image/
http://www.robindavid.fr/opencv-tutorial/chapter1-starting-with-opencv.html
https://github.com/sagivo/algorithms

Old school edge detect, still beautiful
https://prak04.wordpress.com/tag/opencv/
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Gradient_Sobel_Laplacian_Derivatives_Edge_Detection.php
http://stackoverflow.com/questions/21010967/iphone-ios-convert-photos-into-manga-style

http://www.answerandquestion.net/questions/5095371/convert-grayscale-image-to-cross-hatch-shading

Computer Vision Detect Manga Text
https://github.com/johnoneil/MangaTextDetection


Actually, a rough manga/anime effect can be created rather simply. For something more painterly or stylistic, beyond just filtering and abstraction, it needs further tweaking. Sometimes I found the pre-made code slow... so being able to write your own, faster algorithm is better.

INTERESTING THINGS I FOUND ALONG THE WAY

For lots of the code, I have to thank all the blog writers and the opencv documentation out there. They helped a lot. There are some good YouTube videos as well.

I have not invented any new code, but I managed to use the "LEGO blocks" of code to do what I was trying to do at a basic level.

Anyway, I made a lot of interesting "discoveries".

DIFFERENT VERSIONS OF OPENCV
Open source development sometimes moves faster than iPhone iterations *kidding*. Which means we gotta be quick and always use the latest version. Sometimes finding the one that works for you can be a little bit tricky.


Read also:
http://docs.opencv.org/


RESOLUTION OF THE WEB CAMERA
When using my Logitech webcam the other day, I kept getting a resolution of 640 x 480 when viewing and writing video.

I looked around for solutions but could not get it right, until now. Apparently all we need to do is SET the propId. The property ID itself is specified by a number.

# This actually sets the resolution for the camera correctly
cap.set(3, 1280)   # propId 3 = frame width
cap.set(4, 720)    # propId 4 = frame height
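In the newer cv2 API (OpenCV 3) those magic numbers also have named constants, which read a little better. A small sketch, assuming the cv2.CAP_PROP_* names:

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # same as propId 3
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)   # same as propId 4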

What is the propId number?
http://docs.opencv.org/3.1.0/d8/dfe/classcv_1_1VideoCapture.html#gsc.tab=0


Also read this nice post about it:
http://opencvinpython.blogspot.com.au/2014/09/capture-video-from-camera.html

Shift+TAB inside Jupyter can reveal some helpful info on a method
Yey to HD 720 webcam. 

IMAGE BITS MATTER

Of course, 8 bit is different from 16 bit and 32 bit. But we rarely think about it, until we find out that we cannot mix those bit depths. Like in Photoshop, sometimes you need to downgrade the bit depth in order to work with a certain filter.

RGB channel values can be
  • 0 to 255 for CV_8U
  • 0 to 65535 for CV_16U 
  • 0 to 1 for CV_32F
Just something to keep in mind. I too am still familiarizing myself with these numbers.
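For example, moving an image from 8 bit to a 0-1 float range and back looks roughly like this (the file name is a placeholder):

import cv2
import numpy as np

img8 = cv2.imread("photo.jpg")                             # CV_8U style, values 0 to 255
img32 = img8.astype(np.float32) / 255.0                    # CV_32F style, values 0.0 to 1.0
back8 = np.clip(img32 * 255.0, 0, 255).astype(np.uint8)    # back to 8 bit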

There are 2 kinds of opencv2 or cv2 module (!)

This is, again, a gotcha for me, but you may encounter this strangeness while learning OpenCV and Python.

The way the PROP constants are handled is also slightly different. Well, you will eventually figure it out.

OLDER:

fps = videoCapture.get(cv2.cv.CV_CAP_PROP_FPS)
size = (int(videoCapture.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)),
        int(videoCapture.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)))

NEWER:
fps = videoCapture.get(cv2.CAP_PROP_FPS)
size = ( int(videoCapture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(videoCapture.get(cv2.CAP_PROP_FRAME_HEIGHT)) )
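If you are not sure which flavour you have, a quick check is the version string; as far as I can tell, the old cv2.cv submodule only exists in the 2.x bindings:

import cv2

print(cv2.__version__)      # e.g. "2.4.13" vs "3.1.0"
print(hasattr(cv2, "cv"))   # True on the older bindings, False on the newer ones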

BGR not RGB!

Working in greyscale, or just feeding in the webcam and outputting it again, you would never realize that you have been working in the BGR color space.

Actually, I only found out after I ran one tutorial, got confused about the RGB, and the image appeared blue... It is almost like working in a different color space. I used to work as a print designer back then, where RGB versus CMYK is the important distinction.

Now, if the RGB is actually BGR... it is confusing at first, but maybe it is better for numpy? I don't know.
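The practical fix is a single conversion before handing the image to matplotlib. A small sketch with a hypothetical file name:

import cv2
import matplotlib.pyplot as plt

img_bgr = cv2.imread("photo.jpg")                    # OpenCV loads this as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # reorder channels for matplotlib

plt.imshow(img_rgb)   # now the colours look right
plt.show()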

But, I found some blog posts that explain it well:
http://art-of-ai.com/dajsiepoznac/02/opencv-intro/
http://giusedroid.blogspot.com.au/2015/04/blog-post.html
http://www.pyimagesearch.com/2014/11/03/display-matplotlib-rgb-image/

Displaying Processed Video inside Jupyter/iPython Notebook

This is actually possible! I was surprised, after finding so many dead ends... Even though the displayed video is choppy and slow at playback, it does seem possible. Maybe in the near future we will be able to do this more easily.

https://github.com/bikz05/ipython-notebooks/blob/master/computer-vision/displaying-video-in-ipython-notebook.ipynb
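The rough idea (this is my own sketch of the common approach, not the exact code from the linked notebook) is to encode each frame as JPEG and keep replacing the displayed image in the notebook output, which is also why playback ends up choppy:

import cv2
from IPython.display import Image, display, clear_output

cap = cv2.VideoCapture("clip.mp4")   # hypothetical video file; 0 would use the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode(".jpg", frame)
    if not ok:
        continue
    clear_output(wait=True)            # replace the previous frame in the cell output
    display(Image(data=jpg.tobytes()))
cap.release()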


TEMPLATE MATCHING

The theory:
http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
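In Python it boils down to cv2.matchTemplate plus cv2.minMaxLoc. A minimal sketch with hypothetical file names:

import cv2

scene = cv2.imread("manga_page.png", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("speech_bubble.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score every position
result = cv2.matchTemplate(scene, templ, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = templ.shape
print("best match score:", max_val, "top-left corner at:", max_loc, "size:", (w, h))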

Finding Anime Face
https://github.com/nagadomi/lbpcascade_animeface
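Using that cascade file is the same routine as OpenCV's usual face detection; a small sketch, assuming you have downloaded lbpcascade_animeface.xml and have some anime frame to test on:

import cv2

cascade = cv2.CascadeClassifier("lbpcascade_animeface.xml")   # path assumed
img = cv2.imread("anime_frame.png")                           # hypothetical test image
gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)   # mark each detected face
cv2.imwrite("detected.png", img)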

Actually "teaching" computer to see and tracking images are interesting to explore.

The next topics I am interested in exploring include:

  • HDR Imaging
  • Neural Network Deep Dream
  • Computer Learning

Have fun and be creative!

Continuing with my Wolfram Mathematica Trial Experience... I watched and went through some more Mathematica introduction videos, read lots of Mathematica documentation and also going through the Wolfram Lab Online again a few times. There are some major learning curves and Mathematica is a lot different from normal programming language. Sometimes there is a lot of interesting "shortcuts", say like: FindFaces[] , WordCloud[] . Sometimes I got a little confused on how one can do iterations. Normally FOR LOOP concept is introduced early, but in Wolfram, because everything is EXPRESSIONS and ENTITY (like OBJECTS), sometimes it gets quite quirky. Mind you I am still in the first impression and having to look at many tutorials. Lots of NEAT EXAMPLES from documentation, but sometimes I got lost. I found Wolfram to be really awesome with LIST and generating list. It's almost too easy when it works visually. I cannot explain them with my own words yet, but there are