A few summers ago, I bought a cheap second-hand DSLR on eBay and converted it to take pictures in infrared. Why, you may ask? The reason is twofold: for one, we cannot see infrared light, and for another, the pictures look familiar yet alien. Don’t get me wrong, I could easily have taken a picture on my iPhone and applied a filter that makes the result look like IR, but that would not be the real thing. So here I go (while waiting for another simulation to finish), writing a post on my first experiences with IR photography.
The old alchemists already dreamed of the ability to grow miniature beings in their vessels. Biotechnologists at the TU Berlin may make this so-called Homunculus, or “Human on a Chip”, a reality (link); or at least something like it. The scientists are working on a micro-organism capable of simulating human organs and their interactions, all at a scale of 1:100,000. The purpose of such an innovation is to replace countless human and animal test subjects in medical and cosmetic trial studies. Furthermore, this tiny organism is designed to yield results more quickly, cheaply and reliably than tests on mice or rabbits.
This is the final post of the series on how I managed to 3D print the brain of my dear friend from the fMRI files she gave me. If you have been following this series, you might have noticed that I spent some late nights after work tinkering with NIfTI files and various applications to extract the brain structure from them. I finally reached the end and decided to write a last post summarising what I did and what the results look like. Maybe, after you have had your fMRI scan, you can ask the technicians for a copy of the scan so you can print your own brain 🙂
During my undergraduate studies I took a module on information theory. It was incredibly abstract, but very relevant and interesting. Unfortunately, up to this point in my degree I had not found any application for that knowledge. That was until my good friends and colleagues in our Brain Embodiment Laboratory (BEL) decided to share one of their information theory problems: how can one obtain, or at least estimate, the mutual information (MI) shared between two signals using a k-nearest-neighbour (kNN) algorithm? Now this may seem absurd, but that chat sparked an extraordinary motivation to refresh my memory! Mainly because it is an incredibly interesting and challenging problem, but also because I strongly suspect the answer will be of huge help for the Bayesian probability scheduling problem I got stuck on last week.
So, here is my take on making sense of information theory and on explaining why the link between kNN and MI exists. As a warning, there will be equations, but I will do my best to talk around them (in case you would rather not read them).
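To make the kNN–MI link concrete before getting to the equations, here is a minimal sketch of the standard kNN-based estimator, the Kraskov–Stögbauer–Grassberger (KSG) estimator. The function name, the default k=3, and the tolerance constant are my choices for illustration; the idea is that the distance to the k-th nearest neighbour in the joint space sets a local scale, and counting how many marginal neighbours fall within that scale yields an MI estimate without ever binning the data:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma


def ksg_mutual_information(x, y, k=3):
    """Estimate MI(X; Y) in nats with the KSG kNN estimator (algorithm 1)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])

    # Distance to the k-th nearest neighbour in the joint space,
    # using the Chebyshev (max) norm as the KSG derivation requires.
    # k + 1 because the query point is its own nearest neighbour.
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]

    tree_x, tree_y = cKDTree(x), cKDTree(y)
    # Count the neighbours strictly inside eps in each marginal space
    # (subtract 1 to exclude the point itself).
    nx = np.array([len(tree_x.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(tree_y.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])

    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

For correlated Gaussians the true MI is known in closed form (−½·ln(1−ρ²)), which makes a handy sanity check: the estimate should sit near 0.83 nats for ρ = 0.9 and near zero for independent signals.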
In my previous post, where I attempted to convert the output files from an fMRI brain scan into a 3D printable format, I stumbled upon an application called FreeSurfer. I have now managed to load the provided NIfTI (*.nii) file and let the application do some magic analysis, too. This analysis resulted in a whole collection of files that are stored in the /Applications/freesurfer/subjects/subj_id directories. Here is my attempt at explaining what some of these files mean and what I did next to obtain the first proper stereolithography (STL) file that should print.
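For reference, the workflow follows this general pattern. Treat it as a sketch rather than a recipe: the exact behaviour depends on your FreeSurfer version and setup (FREESURFER_HOME and SUBJECTS_DIR must be configured), and the file names brain.nii and subj_id are placeholders for your own:

```shell
# Run FreeSurfer's full reconstruction pipeline on the scan
# (this can take several hours)
recon-all -i brain.nii -s subj_id -all

# Convert the left and right pial surfaces to STL;
# mris_convert picks the output format from the file extension
cd /Applications/freesurfer/subjects/subj_id/surf
mris_convert lh.pial lh_pial.stl
mris_convert rh.pial rh_pial.stl
```

The resulting STL files describe the two cortical hemispheres separately, so a slicer or mesh editor can be used to join or position them before printing.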