
Friday, December 19, 2014

Augmented Reality with Leap Motion, OpenCV, Three.js

Another happy exploration with the IPython notebook.
OpenCV is used to calibrate the camera and find the camera's position in Leap Motion coordinates. Then the WebGL camera settings are matched to it, and the fun begins.
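The position-matching step can be sketched in plain NumPy. This is not the original notebook, just an illustration of the underlying relation: once OpenCV's pose estimation (e.g. `cv2.solvePnP`) has produced extrinsics mapping Leap coordinates into camera coordinates, the camera center in Leap coordinates is `C = -Rᵀt`, which is what the three.js camera position would be set to. The rotation and translation values below are toy numbers, not real calibration output.

```python
import numpy as np

# Sketch: given extrinsics (R, t) that map world (Leap Motion) coordinates
# into camera coordinates as  x_cam = R @ x_world + t,  the camera center
# in world coordinates is the point that maps to the origin:
#   C = -R^T @ t

def camera_center(R, t):
    """Camera position in world (Leap) coordinates from extrinsics."""
    return -R.T @ t

# Toy example: camera rotated 90 degrees about Z, translated 2 units along Z
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 2.0])
C = camera_center(R, t)
```

Sanity check on the relation: mapping `C` back through the extrinsics lands on the camera origin, `R @ C + t = 0`.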

See also a Google Hangout presentation made with the help of the Leap Motion.

Friday, September 19, 2014

Finding low and high md5/sha512 hashes with OpenCL and Python

Currently the leader of The Famous Hash Game (for MD5) and hash|challenge, using my HD7970, the IPython notebook, PyOpenCL, atitweak, and kernels modified from oclcrack and John the Ripper.
Source code can be found at http://nbviewer.ipython.org/github/tjwei/tjw_ipynb/blob/master/lowest%20md5.ipynb
and http://nbviewer.ipython.org/github/tjwei/tjw_ipynb/blob/master/lowest%20sha512.ipynb
The hash rate on my HD7970 GE is about 6.8 GHash/s for MD5 and 175 MHash/s for SHA-512.
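The search itself is conceptually simple, and a CPU-only sketch with `hashlib` shows the idea (the real runs used PyOpenCL kernels on the GPU to reach those rates; the `prefix`/counter message format here is an illustrative assumption, not the actual candidate scheme):

```python
import hashlib

# Sketch of the "lowest hash" search: try candidate messages of the form
# "prefix<counter>" and keep the one whose MD5 hex digest sorts lowest.
# The GPU version does exactly this, just billions of times per second.

def lowest_md5(prefix, tries):
    best_digest, best_msg = None, None
    for i in range(tries):
        msg = f"{prefix}{i}".encode()
        digest = hashlib.md5(msg).hexdigest()
        if best_digest is None or digest < best_digest:
            best_digest, best_msg = digest, msg
    return best_digest, best_msg

digest, msg = lowest_md5("tjwei", 10000)
```

For the "highest hash" variant, the comparison simply flips to `>`.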


Wednesday, September 10, 2014

Real Programmers Use One-Way Hash Function


GNU nano is a text editor - a program often used to edit the source code of other programs. Emacs, Vim and ed are all progressively more "hard core" editors. cat is a Unix program that concatenates and outputs the contents of files. Things get steadily more ridiculous from here. Using a magnetised needle to flip bits on a hard drive requires nanometer precision and binary mastery, but in the early days of programming people did use needles sometimes to fix bugs on Punched cards. The use of a magnetized needle may also be a reference to the Apollo AGC guidance computer, whose instructions were physically written as patterns of wires looped around or through cylindrical magnets in order to record binary code.  -- http://www.explainxkcd.com/wiki/index.php/378:_Real_Programmers
For a real programmer, writing a binary executable directly seems a bit too easy, even with magnetized needles. The real hard-core way is to use a one-way hash function, like SHA-2 or MD5.

In the spirit of the Great Python Challenge, which was used to promote PyCon APAC 2014, I posted a challenge on my Facebook, asking whether anyone could use a one-way hash function to write a program, for example:
cat 0.c| shasum -a 384 | xxd -p -r > a.out && chmod a+x a.out && ./a.out
sha384 was chosen because it is possible to fit an ELF binary into a sha384 digest (see http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html).

After I posted the challenge, I tried to find a solution myself too. It turns out that it is possible.
Follow the steps below (copy & paste) to try the code:
mkdir test
cd test
wget https://raw.githubusercontent.com/tjwei/tjw_ipynb/master/0.c
sh 0.c
gcc 0.c && ./a.out
cat 0.c | shasum -a 384 | xxd -p -r > a.out && chmod a+x a.out
./a.out


The story

My first solution is https://github.com/tjwei/tjw_ipynb/blob/master/a.c
It sort of works, but I was not satisfied, because I'd expect it to at least say "Hello World!".
My second attempt is https://raw.githubusercontent.com/tjwei/tjw_ipynb/aa733143a5a76f460122e93028f49693d53931e8/0.c
which is not bad: it does say "Hello World!", but it still has some issues I don't like.
I realized that I had to search harder for a solution. The first thing I did was reduce the code to 112 bytes, so that the whole thing fits in a single sha512 block; this speeds up the hashing by a factor of 5 to 7. To do that, all of the description, which had been in the code's comments, was moved to a web page, and the URL was shortened.
Then I used OpenCL and a customized kernel to do the sha384 hashing. On my HD7970 GHz Edition, it does 160 MHash/s without overclocking the card.
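The search loop itself is simple; the hash rate is what makes it feasible. A CPU-only sketch with `hashlib` (the real run used the OpenCL kernel, and the real predicate is "the 48-byte digest is a runnable ELF binary"; the predicate and the C template below are illustrative stand-ins):

```python
import hashlib

# Sketch of the search: vary a nonce inside a trailing comment of the C
# source and test a predicate on the 48-byte sha384 digest.  The actual
# predicate checks that the digest forms a tiny runnable ELF executable;
# here any predicate can be plugged in.

def search(template, pred, limit):
    for nonce in range(limit):
        src = template.format(nonce).encode()
        digest = hashlib.sha384(src).digest()  # 48 bytes
        if pred(digest):
            return src, digest
    return None

# Illustrative predicate: digest starts with the first byte of the ELF magic.
hit = search("int main(){{}} //{}\n", lambda d: d[0] == 0x7F, 200000)
```

With a one-byte condition a hit comes after ~256 tries on average; the real ELF constraint is vastly rarer, which is why the GPU's 160 MHash/s mattered.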
You can find the final result at https://github.com/tjwei/tjw_ipynb/blob/master/0.c
and the working notebook at
http://nbviewer.ipython.org/github/tjwei/tjw_ipynb/blob/master/RealMan2.ipynb

Since it is a challenge, I did receive some replies. The answer from Kuo-Tung is quite good too; see http://paste.plurk.com/show/1982392/

Tuesday, September 9, 2014

Poor man's Holograph



This is a simple proof of concept, using an Android phone as a swept-volume display. Above, it shows a cube; two of the sides display English letters. Because of the refresh rate of my Android screen and the frame rate of my video recorder, the actual result is better than it looks in the video (much like recording a CRT with a video camera).

  You can find a lot of POV display videos on YouTube

 

  and volume displays like the LED cube




A swept-volume display is something in between an LED cube and POV.
The good part is that the X,Y resolution is much better than an LED cube's: the phone (an old Acer Stream) has a 480x800 AMOLED screen. However, the resolution on the Z-axis is very low, 6~8 pixels if we are lucky.

A GIF image that can be run as a Python, Ruby, JavaScript, and Java program.

The above image, face6.gif, is a proper GIF that can also be run as a Python script on Windows/Linux/Mac/Cygwin.
That is, if you try
python face6.gif
on the command line, it is executed (by CPython).
Moreover, it also runs as a Ruby, Perl, and Java program:
ruby -x face6.gif
perl -x face6.gif 
java -jar face6.gif

If you rename face6.gif to face6.html and open it with a web browser, the JavaScript code will print "Python rocks".

You can also try
rar x face6.gif
and, not surprisingly, you can also open it with any image viewer.

Since it is also a jar file, running unzip -v face6.gif shows the Java classes inside. The archive also contains a __main__.py, but that __main__.py won't be executed by CPython, because CPython does not like zip files with a comment.
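What makes such a polyglot possible is that each tool checks a different part of the file: GIF viewers require the signature at offset 0, while zip/jar and rar readers locate their own signatures elsewhere in the file. A small sketch (not tied to face6.gif itself) that reports which container signatures a byte string carries:

```python
# Sketch: each tool that opens the polyglot only looks for its own magic
# bytes.  A GIF signature must sit at the very start of the file; zip/jar
# and RAR structures can sit later, so the formats can coexist.

def container_signatures(data):
    found = []
    if data[:6] in (b"GIF87a", b"GIF89a"):
        found.append("gif")        # GIF magic, required at offset 0
    if b"PK\x03\x04" in data:
        found.append("zip/jar")    # zip local file header signature
    if b"Rar!\x1a\x07" in data:
        found.append("rar")        # RAR 4.x archive signature
    return found
```

The script interpreters (Python, Ruby, Perl) are the other half of the trick: they read the file as text, so the executable code has to be tucked where the image decoder treats it as pixel or comment data.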

Enhance PyCon APAC video recordings with the help of slides and OpenCV


The quality of some of the video recordings of PyCon APAC 2014 is really poor (see the examples below). Since most speakers release their slide files under a Creative Commons license, we thought we might use these slides to repair the videos.
The basic idea is to render the slides as images, match them against the video recordings, and then use the rendered slides to replace the poor images in the recordings.
First, I thought about feature matching and tried to use ORB to find key points in the videos and slides. Unfortunately, the quality of the recordings is so poor that feature matching does not work, at least not with ORB.
My second attempt used linear algebra to do template matching. It turns out that the CV_TM_CCOEFF_NORMED method works well enough.
However, we need to find the position of the slide image in the video recording manually. The position depends on the speaker's laptop and perhaps the video connector, but it seems to be fixed for the entire talk, so we only need to adjust the parameter once per talk.
With the help of IPython interactive widgets, it is not too difficult to do that manually. And by using SIFT feature matching, we can find the parameters semi-automatically: all we need to do by hand is find one frame of the recording and one rendered slide image that match each other.
I guess it wouldn't be too hard to make the whole process fully automatic, but the semi-automatic tools are good enough for our original purpose.
The following are the tools we used.

The following is an outline of our process:
  • Extract the pdf as a series of png images. Although there are Python modules like wand that can do this in Python, we shamelessly shell out to ImageMagick's convert.
  • Find the coordinates and size of the slides in the video recording using the interactive tool.
  • Convert the images to 256x256 gray scale and use CV_TM_CCOEFF_NORMED to find the matching slide. 256x256 is a fairly arbitrary choice, and probably more than enough; 128x128 should work just fine.
  • The cut-off parameter does not need to be too precise. Some value between 0.5 and 0.95 should work most of the time.
  • Manually put some slides into a black list. This is mostly because pdf files do not contain the slide animations used in the actual talk. If the pdf includes the animations, so that they can be rendered into a series of png images, then our algorithm works very well.
  • Because we don't know how to encode video with audio in OpenCV, we shamelessly call avconv to merge the generated video with the original audio track.
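The matching step above can be sketched in plain NumPy. When the cropped video region and the rendered slides are all resized to the same small gray-scale shape, CV_TM_CCOEFF_NORMED degenerates to a single zero-mean normalized correlation score per slide (this is a sketch of the idea, not the project's actual code; `cv2.matchTemplate` computes the same quantity):

```python
import numpy as np

# Zero-mean normalized cross-correlation of two same-shaped gray images:
# the single-position case of OpenCV's CV_TM_CCOEFF_NORMED.
def ccoeff_normed(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# Score the cropped frame region against every rendered slide; reject the
# best match if it falls below the cut-off (0.5-0.95 works most of the time).
def best_slide(frame_region, slides, cutoff=0.7):
    scores = [ccoeff_normed(frame_region, s) for s in slides]
    i = int(np.argmax(scores))
    return (i, scores[i]) if scores[i] >= cutoff else None
```

A nice property of the zero-mean normalized score is that it is invariant to brightness and contrast changes, which is exactly the degradation the projector-to-camera path introduces.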
The following are some screenshots of the enhanced videos and the original videos.
Toomore's talk. This recording was the "motivation" for this project.
The text of the original recording is unreadable.



Tseng's talk. The quality of the video recording is equally poor. 

The final result is quite good, but the slides had been updated, so the match failed for the cover slide.

Cheng-Lung Sung's talk is different; it seemed a perspective transformation might be needed.
We thought we might have to modify our interactive tools; however, an affine transformation works.






Even for recordings with better quality, like the recording of Andy's talk, our tools still enhance the quality significantly.







Feature matching in the interactive tool


The following are the enhanced videos.

Solve HITCON 2014 CTF with IPython notebook


This was my first CTF, and I tried to solve it with the IPython notebook. I only got 7 flags, but solving it with the IPython notebook was a fun experience: very interactive, and I could visualize my progress. See the QR maze video below. Using the IPython notebook and the power of Python, the telnet-based maze can easily be visualized in a web browser, and I don't lose my progress between telnet connections, because everything is kept in the IPython kernel.
Most of the ipynb files can be found at http://nbviewer.ipython.org/github/tjwei/tjw_ipynb/tree/master/