## Monday, September 10, 2012

### Student Response Systems

There are plenty of Student Response Systems (SRSs) out there. Which one should you choose? This brief document summarizes what I found on the web before the Fall of 2012.

The systems differ in a few characteristics. The first is the type of device the students can use. Classical, iClicker-like systems require the students to buy a device which communicates with a receiver that the teacher possesses. Since our school uses iClickers, let me focus on them. The overall cost of first-generation iClickers is $10/device, assuming the student sells the device back to the bookstore (which they can). This is not a major cost, but who likes to pay when you don’t have to? The first limitation of iClicker-like systems is that they are bound to the smart classrooms and the computers there. Thus, if you are like me and use your own computer for projection, you will need to switch between screens to show the results of a poll. This makes the use of iClickers quite cumbersome. A second major limitation of iClickers is that they are restricted to multiple choice questions. In particular, they don’t allow free text entry, numerical responses, or other types of test questions (like matching). Free text and numerical responses are very useful for assessing the knowledge of students, and designing meaningful multiple choice questions is hard and time-consuming.

# Systems accepting text input from non-smart phones

An alternative to iClickers is to use systems that rely on Wifi networks or the texting capabilities of conventional phones. Wifi-capable devices include smartphones, tablets and laptops. The systems that I have found that support texting (i.e., receiving input from non-smart phones) are Poll Everywhere, LectureTools and Top Hat Monocle. These also support input from Web-capable devices. Their pricing differs quite a bit. Poll Everywhere allows instructors to pay for the semester; the price is currently USD 350. LectureTools requires you to pay for two semesters at the price of USD 800. Top Hat Monocle does not offer instructor-paid plans; the per-student price is USD 20 for a semester, or USD 38 for five years.
As said before, all these systems support Wifi and texting. I had a chance to test Poll Everywhere and LectureTools. I had some problems with LectureTools (importing my slides did not work). The concept of LectureTools is that you keep your slides on the web, tightly integrated with the questions. They support annotating slides during the presentation, which is nice. However, overall LectureTools was not as smooth and easy to use as Poll Everywhere. For example, in the limited time I had, I could not figure out how the students would connect to my questions on the web. Poll Everywhere, on the other hand, was really easy to use. It supports Powerpoint and Keynote. Compared to Top Hat Monocle, Poll Everywhere is not as feature-rich (Top Hat Monocle has games, for example), but I was happy with the functionality Poll Everywhere provided.

# Systems that use Web-capable devices

If one is content with Web-enabled devices, the number of available SRSs soars. In fact, one can use any Web-based solution, many of them free. Starting with the free options: of course, one can create a quiz in Moodle. However, controlling what the students see and what they don’t is pretty cumbersome; Moodle is not well suited for live polling. Another method is to use Twitter. Students are pretty excited about Twitter (from the feedback I got), though I would be careful projecting everything that comes in to the screen -- some moderation might be essential to keep the class under control. Another problem with Twitter is that a tool for analyzing responses to questions would be needed (and I know of no such tool). The next option is to use Google Forms. The very idea of Google Forms is that information submitted on the web is sent to a Google spreadsheet. Since you need the spreadsheet to get the URL of the form, start by creating a spreadsheet, then insert a form there.
Once the form is created (give it a cool skin!), go back to the spreadsheet to get the URL of the form. You will send this to the students (maybe compressing it using tinyurl.com, or a similar service). From the spreadsheet, you can control when the form accepts input. To support ad hoc questions, you can just create an empty form and recycle it throughout your presentation. If you want to use a fixed set of questions, you will need one form (and thus one spreadsheet) per question, which you can store on your Google drive. The downside of Google Forms is that students can submit as many responses as they wish. With Google Apps, presumably there is a way around this, but this would need to be investigated. Flisti is an extremely simple web-based polling system (I guess there are many other similar systems). You go to their webpage, create the poll there and give the poll’s URL to the students. You can view the results of the poll online. I think that users are tracked based on their IP addresses, so no multiple submissions are possible from the same IP address to the same poll. Only multiple choice questions (possibly with multiple answers) are supported. Socrative is a fuller Web-based SRS currently in beta-testing. During the beta-testing phase, the system is free. The web-based interface is nice and sleek, and I found it extremely user-friendly. The teacher can control in real time which questions are “live” in his/her “classroom”. The classroom is identified by a number that the students enter. The only issue with Socrative is that every activity is limited to 50 students. QuestionPress is another commercial system. The price for my class is $66. This seems to be a mature system that I was truly impressed by. All interactions are Web-based. ClickerSchool is similar to QuestionPress; the price is $95 for my class for one semester. ClickerSchool provides specialized smartphone apps (both for iOS and Android).
eClicker, on the other hand, requires the teacher to buy software. All their software supports Apple products only (Mac OSX and iOS). SRN (studentresponsenetwork.com) offers a campus-wide license for $195. This is a client-server system that requires installing software on the teacher’s, as well as the students’, computers.
I could not find pricing information on the web for TurningPoint/ResponseWare, which seems to be a mature product (but it cannot be “tried”). The same goes for vClicker.

# Which type of system to choose?

The question remains: Which type of system to choose? One factor to consider is how important it is to have question types other than multiple choice in class. Personally, I think that multiple choice questions exist only for historical reasons -- their pedagogical value is rather questionable. Good multiple choice questions are extremely hard and time-consuming to create. If this is not convincing enough and you don’t mind switching the projector back and forth between your computer and the one in the classroom, you can stay with iClickers.
In the opposite case, the next thing to consider is whether you want to support phones with text input. I have just polled my students, and out of the 52 responses so far, 43 can and are willing to bring their laptops to the classroom, 37 carry a smartphone, 6 a tablet, and 8 carry a non-Web-enabled phone (these are overlapping groups of students). Thus, the vast majority of students are able to use a system that is built around the Web. Since it will always be hard to achieve 100% coverage, one idea is to let the students pair up or form groups of three. Based on the statistics I gathered, I am leaning towards the view that support for texting input should not be seen as a major advantage.
However, it should also be mentioned here that a potential danger of using laptops or other fancy, Web-enabled devices is that they represent potential sources of distraction. Thus, with the use of these devices, the teacher will need to face the challenge of competing for the students’ attention with the social networks, email, and ultimately the whole Web.
The next factor to consider is the ease of use of the SRS. The teacher may need to create a significant number of questions for every class. Integration with Powerpoint and Keynote may be a plus, but switching between a browser and presentation software looks easy enough.
Since I will use the chosen SRS only for formative assessment and not for grading (the present common sense is that grading with it would be a bad idea, not to mention that it would indeed require full coverage of the whole class), I don’t care whether the SRS supports automated grading and keeps track of student identities.
Based on these considerations, I will probably go with Socrative.

References
I have created a google spreadsheet for comparing the systems listed here, in addition to a few more. The spreadsheet can be found here. The spreadsheet links a few other sources that I have used during my research.

## Saturday, April 14, 2012

### Approximating inverse covariance matrices

Phew, the last time I posted an entry to this blog was a loong time ago.. Not that there was nothing interesting to blog about; I just always delayed things. (Btw, Google changed the template, which broke the rendering of the latex formulae -- not happy.. Luckily, I could change back to the old template..) Now, for the actual contents:

I have just read the PAMI paper "Accuracy of Pseudo-Inverse Covariance Learning-A Random Matrix Theory Analysis" by D. Hoyle (IEEE T. PAMI, 2011, vol. 33, no. 7, pp. 1470-1481).

The paper is about pseudo-inverse covariance matrices and their analysis based on random matrix theory, and I can say I enjoyed this paper quite a lot.

In short, the author's point is this:
Let $d,n>0$ be integers. Let $\hat{C}$ be the sample covariance matrix of some iid data $X_1,\ldots,X_n\in \mathbb{R}^d$ based on $n$ datapoints and let $C$ be the population covariance matrix (i.e., $C=\mathbb{E}[X_1 X_1^\top]$, assuming zero-mean data). Assume that $d,n\rightarrow \infty$ while $\alpha = n/d$ is kept fixed. Assume some form of "consistency" so that the eigenvalues of $C$ tend to some distribution over $[0,\infty)$. Denote by $L_{n,d} = d^{-1} \mathbb{E}[ \| C^{-1} - \hat{C}^{\dagger} \|_F^2]$ the reconstruction error of the inverse covariance matrix when one uses the pseudo-inverse.

Then, $f(\alpha) := \lim_{d,n\rightarrow\infty} L_{n,d}$ will (often) be well-defined. The main thing is that $f$ becomes unbounded as $\alpha\rightarrow 1$ (also as $\alpha\rightarrow 0$, but this is not that interesting). In fact, for $C=I$, there is an exact rate as $\alpha \rightarrow 1$:

$f(\alpha) = \Theta(\alpha^3/(1-\alpha)^3)$.
Here is a figure that shows $f$ (on the figure $p$ denotes the dimension $d$).

Nice.

The author calls this the "peaking phenomenon": the worst case, from the point of view of estimating $C^{-1}$, is when the number of data points equals the number of dimensions (assuming full-rank $C$; otherwise this won't make sense, because you could just add dummy dimensions to improve $L_{n,d}$). The explanation is that $L_{n,d}$ is sensitive to how small the smallest positive eigenvalue of $\hat{C}$ is (this can be shown), and this smallest positive eigenvalue becomes extremely small as $\alpha\rightarrow 1$ (and it does not matter whether $\alpha \rightarrow 1-$ or $\alpha\rightarrow 1+$).

Now notice that having $\alpha=0.5$ is much better than having $\alpha=1$ (for large $n,d$). Thus, there are obvious ways of improving the sample covariance estimate!
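This is easy to check numerically. Here is a small Monte Carlo sketch (my own illustration, not code from the paper) that estimates $L_{n,d}$ for $C=I$ at a few values of $\alpha$; the error blows up around $\alpha=1$:

```python
import numpy as np

def recon_error(n, d, trials=20, seed=0):
    """Monte Carlo estimate of L_{n,d} = E[||C^{-1} - pinv(C_hat)||_F^2] / d
    for C = I (standard normal data, zero mean assumed known)."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))   # iid samples with population covariance I
        C_hat = X.T @ X / n               # sample covariance matrix
        err = np.linalg.norm(np.eye(d) - np.linalg.pinv(C_hat), 'fro') ** 2 / d
        errs.append(err)
    return float(np.mean(errs))

d = 80
for alpha in (0.5, 1.0, 2.0):
    print(alpha, recon_error(int(alpha * d), d))
```

Running this, the error at $\alpha=1$ should dominate the errors at $\alpha=0.5$ and $\alpha=2$ by orders of magnitude, exactly as the peaking phenomenon predicts.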

In fact, the author then suggests that in practice one should use bagging when $n \ge d$, and random projections when $n\le d$. He then shows experimentally that this improves things, though unfortunately the demonstration is for Fisher discriminants that use such inverses, and a direct demonstration for $L_{n,d}$ is lacking. He also notes that the "peaking phenomenon" can be dealt with by other means, such as regularization, for which we are referred to Bickel and Levina (AoS, 2008).

Anyhow, one clear conclusion of this paper is that the pseudo-inverse must be abandoned when it comes to approximating the inverse covariance matrix. Why is this interesting? The issue of how well $C^{-1}$ can be estimated comes up naturally in regression, or even in reinforcement learning when using the projected Bellman error. And asymptotics is just a cool tool for studying how things behave..

## Wednesday, May 4, 2011

### Brains, Minds and Machines

Alborz, a former UofA student, shared a link on facebook to a video recording of a recent MIT150 symposium on Brains, Minds and Machines. I watched the video yesterday (guess what, I need to mark 40-something finals, hehe:)).
I wrote a comment back to Alborz on facebook and then I thought, why not make this a blog post? So, here it goes, edited, expanded. Warning: Spoilers ahead and the summary will be biased. Anyhow..

The title of the panel was: "The Golden Age — A Look at the Original Roots of Artificial Intelligence, Cognitive Science, and Neuroscience", and the panelists were Emilio Bizzi, Sydney Brenner, Noam Chomsky, Marvin Minsky, Barbara H. Partee and Patrick H. Winston. The panel was moderated by Steven Pinker, who started with a 20-30 minute introduction. Once Pinker was done, each of the panelists delivered a short speech, and at the end there were like two questions asked by Pinker.

My heroes in the panel were Minsky and Winston. They rocked! Minsky almost fell asleep during his talk, but he was well aware of this, and I loved him. He told a story about Asimov not wanting to come to his lab to see the real robots (he did not want to be disappointed), and about von Neumann, who said that he did not know whether Minsky's thesis would qualify as a thesis in mathematics (they were both at Princeton, in the math department), but that soon it would. I really enjoyed this part. Winston acted a bit like a comedian. I did not mind this either. One thing that Minsky and Winston both said is that the mistakes happened when AI became successful, and from then on everyone seemed to forget the science part of AI. But they did not say much about how we can get back on track (except that we should try). Winston blamed the short-sightedness of funding agencies, and who would disagree?

Chomsky made some interesting claims. He claimed that language is designed for thought and not for communication. This was a pretty interesting claim. He claimed other things supporting his idea about innate, universal grammars, but I did not know how much credit to give to those, as I have many linguist friends who strongly disagree. Things became interesting when, answering a question at the end, he dumped the whole of machine learning. He talked about "a novel scientific criterion" that was never heard of before, referring to being able to predict "on unanalyzed data". He said that "of course" with enough data you will do better, but it seemed that he finds this evaluation criterion ridiculous. He also said that a little statistics does not hurt, but he still seems to think that the big deal is the engineering part. (He did not say it in these words, but this is what I took from what he said.)

Sydney Brenner (pioneer in genetics and molecular biology) was puzzled about why he was invited, though he had some good stories. I liked when he said that in 50 years people will not understand why everyone talked about consciousness 50 years ago.

Emilio Bizzi (a big shot in neuroscience, in particular in motor control) talked about modularity, "dimension reduction" and generalization, and he looked like a fine Italian gentleman, but I have to confess that I don't remember anything else, though this could have been because it was late.

Barbara H. Partee read from a script. In the first 5 minutes or so she was mostly praising Chomsky. Then she started to talk about her own work, which was foundational in semantics. She talked about how semantics is the real thing. By semantics she means formal semantics (as in logic). While in general I am fond of this work, I am not sure that anything like this is going on in our brain, or that sentences really do have a meaning in a formal sense. It seems to me that the fact that sentences can have a formal meaning is more likely an illusion, a post-hoc thought, than the real thing. And it is unclear whether bringing in formal logic is going to get us anywhere. Unfortunately, there was no discussion of this at all. At one point she made the remark that "search engines do not use semantics", but then left us hanging as to how we could do any better. Oh, well..

In summary, an impressive set of people, some nice stories, but little cutting edge science. Loads of romanticism about the 50s and 60s and no advice for the young generation. The title tells it all. It was still nice to see these people.

## Wednesday, April 13, 2011

### Useful latex/svn tools (merge, clean, svn, diff)

This post is about some tools that I have developed (plus one that I downloaded) which help me streamline my latex work cycle. I make the tools available, hoping that other people will find them useful. However, they are admittedly limited (more about this below) and, as usual for free stuff, they come with zero guarantee. Use them at your own risk.

The first little tool is for creating a cleaned-up file before submitting it to a publisher who asks for source files. I call it ltxclean.pl; it is written in Perl. It can be downloaded from here.
The functionality is:
(1) to remove comments
(2) to remove \todo{} commands
(3) to merge files included from a main file into the main file
(4) to merge the bbl file into the same main file

If you make the tool executable (chmod a+x ltxclean.pl), you can use it like this:

$ ltxclean.pl main.tex > cleaned.tex

How does this work? The tool reads in the source tex file, processes it line by line and writes the result to the standard output stream, which you can redirect (as shown above) to a file. Thus, whatever the tool does is limited to individual lines. This is a limitation, but it made it possible to write this tool in probably less time than I am spending on writing about it now. There are other limitations, see below. Now, how do we know that this worked? The advice is to run latex+dvips and then diff original.ps new.ps to see if there is any significant change. On the files I have tried, the only difference was the filename and the date.

# Why this functionality, and the gory details

As it happens, removing the comments before you submit a source file is crucial. Not long ago, I submitted a source to a publisher without removing the comments. At the publisher, they loaded the file into a program which wrapped the long lines, including the ones with comments! This created a lot of garbage in the middle of the text. We were pressed for time, so I could not check the text in detail. The result: the text was printed with a lot of garbage! Too bad!! A painful experience; I will never again submit source files with the comments kept in the file! The above utility is meant to handle comments correctly. It pays attention not to create empty lines (and thus new paragraphs) inadvertently, not to remove end-of-line comment markers, etc. The \todo{} commands belong to the same category: they are better removed before submitting the file. For my todos, I use the todonotes package, which puts todo notes on the margin (or within the text). This package supplies the \todo[XX]{ZZZ} command, where [XX] is optional. The script removes such todo commands, but only if they span a single line. For now, you need to remove multi-line todos by hand.
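To illustrate the kind of line-by-line processing involved, here is a minimal sketch (in Python, not the actual Perl code) of the comment-handling rule: drop the text of a comment but keep the % marker itself, so TeX's end-of-line behavior is unchanged, and never treat an escaped \% as a comment:

```python
def strip_tex_comment(line):
    """Drop comment text but keep the '%' marker, respecting escaped '\\%'.
    A simplified sketch; the real ltxclean.pl handles more cases."""
    out = []
    i = 0
    while i < len(line):
        if line[i] == '\\' and i + 1 < len(line):
            out.append(line[i:i + 2])   # copy escape sequences verbatim
            i += 2
        elif line[i] == '%':
            out.append('%')             # keep the marker, drop the comment text
            break
        else:
            out.append(line[i])
            i += 1
    return ''.join(out)

print(strip_tex_comment(r"100\% sure % remove me"))  # -> 100\% sure %
```

Keeping the trailing % matters because a line ending in % suppresses the end-of-line space in TeX; deleting it would subtly change the typeset output.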
Another service of this little tool is to merge multiple files into a single one. Oftentimes, we use the latex command \input to break a large source file into multiple files. However, publishers typically want just one file. So this tool reads in the main file and, recursively, whenever it sees \input{FILE} in the source, it reads in the corresponding file and processes it before continuing with the current file (just like latex would). Finally, if the tool finds a \bibliography{...} command, it takes that out and opens the .bbl file sharing the same base name as the input to the tool. Thus, if the tool was called on the file main.tex, when seeing a bibliography command, the tool will attempt to open main.bbl and include it in place of the \bibliography command. (If you use hyperref, turn off pagebackref, otherwise this functionality will not work.)

# Managing revisions with svn

Two other small utilities that I make available are svnprevdiff and svnreviewchanges. The purpose of these scripts is to help one review changes to files which are under svn control. There is a third script, diffmerge, called by the above two scripts. This script takes two file arguments and loads them into the program DiffMerge, which allows you to visually inspect the differences between the two files and make changes to the second one loaded. On a different platform/installation, or if you want to use a different tool for comparing/merging files, you will need to adjust this script. The utility svnreviewchanges takes a file as an argument, compares it to its base version stored on your disk and opens up the two versions for comparison using diffmerge. The purpose is to allow one to quickly review how a file was changed before submitting it to the svn server (so that you can write meaningful comments in the commit message). The utility svnprevdiff takes a filename as an argument, compares it to its previous version stored on the svn server and then opens up the two versions using diffmerge.
The purpose of this is to check the changes implemented by your pals after an update. A future version will take an optional argument which, when present, will be interpreted as a revision number. Maybe.

# Advice on using latex when working in a team: break long lines

A small but useful habit is to put every sentence on its own line, and generally to avoid long lines (even when writing equations). The reason is that this makes the job of diff much easier. And believe me, diffing is something people will end up doing, for good or bad (mostly good), when they are on a team. Some of my friends, like Antoska, would recommend breaking up individual sentences into multiple lines. You can do this, but if you overdo it, you will find yourself fiddling way too much with what goes into which line. Finally, a tool which does this, written by Andrew Stacey, is fmtlatex.pl. This is also in Perl, and its documentation is printed on the screen if you use perldoc fmtlatex. I still have to try it.

### I can run Matlab on my Mac again!

After much struggling today, I managed to make Matlab run again on my Mac. The major problem was that Matlab complained that I had the wrong version of X11 installed on my system, and it would not start. As I finished teaching for the semester today, I thought I would celebrate by resolving this issue, which I had been struggling with for a year or so by now. On the internet you will see a lot of advice on what to do, and, as they say, the truth is indeed out there; however, it is not so easy to find. In a nutshell, what seems to happen is this. Why does Matlab not start when other applications do (say, Gimp, or Gnuplot using X11)? Matlab seems to assume that the X11 libraries are located at /usr/X11/lib, and it sticks to this assumption no matter how your system is configured. I use XQuartz and macports' X11, and they put stuff elsewhere. I had some legacy code sitting in /usr/X11/, which I did not use.
It was a remainder of some version of X11 that I used probably 2 or 3 laptops ago. Matlab reported that the lib was found, but the "architecture was wrong". The error message had something like: "Did find: /usr/X11R6/lib/libXext.6.dylib: mach-o, but wrong architecture". Anyhow, here is one solution. You have to arrange that /usr/X11 points to a directory that has a working X11 copy. It is probably a good idea to first clean up the old X11 installation. You can do this by following the advice on the XQuartz FAQ page, issuing the following commands in the terminal:

sudo rm -rf /usr/X11* /System/Library/Launch*/org.x.* /Applications/Utilities/X11.app /etc/*paths.d/X11
sudo pkgutil --forget com.apple.pkg.X11DocumentationLeo
sudo pkgutil --forget com.apple.pkg.X11User
sudo pkgutil --forget com.apple.pkg.X11SDKLeo
sudo pkgutil --forget org.x.X11.pkg

Then I reinstalled the latest XQuartz copy (not all of these steps might be necessary, but in order to stay on the safe side, I describe everything I did). I also have macports, and the xorg-libX11, xorg-libXp and xorg-server ports seem necessary for the following steps to succeed (but possibly other xorg-* ports are also needed). I am guessing that XQuartz does not install all the libraries, but after installing enough xorg-* ports through macports, all the libraries used by Matlab will be present. Now, my X11 is located at /opt/X11 and some additional libs are found at /opt/local/lib. So I created a bunch of symbolic links:

sudo ln -s /opt/X11 /usr/X11
for i in /opt/local/lib/libX* ; do sudo ln -s $i /usr/X11/lib; done

The first line creates a symbolic link to /opt/X11, while the second is necessary because of the additional libX* libraries which, for some reason, macports puts into /opt/local/lib instead of putting them into /opt/X11/lib. Initially I did not know that I needed these libs, and Matlab complained that it could not find the image for some lib (it was /usr/X11/lib/libXp.6.dylib).

Anyhow, I am really happy that this worked!
I hope people who will have the same trouble will find my post useful.

## Tuesday, November 17, 2009

### Djvu vs. Pdf

Long blog post again, so here is the executive summary: Djvu files are typically smaller than Pdf files. Why? Can we further compress pdf files? Yes, we can, but the current best solution has limitations. And you can forget all the "advanced" commercial solutions -- they are not as good as a free one.

Introduction

DJVU is a proprietary file format by LizardTech. Incidentally, it was invented by some machine learning researchers, Yann LeCun, Léon Bottou, Patrick Haffner, and the image compression researcher Paul G. Howard, at AT&T back in 1996. The DJVULibre library provides a free implementation, but it is GPLd and hence not suitable for certain commercial software, like Papers, which I use to organize my electronic paper collection. Hence, Papers might not support djvu in the near future (the authors of Papers do not want to make it free, and, well, it is their software, their call).
Djvu files can be converted to Pdf files using ddjvu, a command line tool which is part of DJVULibre (djvu2pdf is a script that calls this tool). Djvu can also be converted into PS files using djvups (then use ps2pdf). However, all these leave us with pretty big files compared to the originals and, on top of it, if there was an OCR layer in the Djvu file, it gets lost -- but that is another story. How much bigger? Here is an illustration:

Original djvu file: 9.9MB
djvu2pdf file: 427.6MB(!)
djvu2ps file: 1.0GB
djvu2ps, ps2pdf file: 162.6MB

Note that I turned on compression in the conversion process (-quality=50). (The quality degradation was not really noticeable at this level.) So, at best, I got more than 16 times the original file size. Going a bit mad about this, I started to search the internet for better solutions. I spent almost a day on this (don't do this, especially if you are a student!)..

JBig2 and the tale of commercial solutions

First, I figured that the difference is that these converters use general image compression techniques (like jpeg), while djvu is specialized to text and black&white images. Thus, for example, djvu can recognize when the same character appears multiple times on a page, store a template, and then store only references to that template. This is clever. I then figured that PDF files support the so-called jbig2 encoding standard, which is built around this idea. Hence the quest for software that would encode a document using a jbig2 encoder and put the result into a pdf. The easiest would be if such software just existed out there. A few commercial packages indeed mention jbig2. I felt lucky (especially seeing that there are a few cheap ones), so I started to download trial versions. Here are the results:

PDFJB2: 34.1MB
CVision PdfCompressor: 48MB
CVision PdfCompressor with OCR: 49MB
A-PDF: 106.8MB
A-PDF + PDFCompress: 106.8MB
djvu2pdf + PDFCompress: conversion failed

Hmm, interesting. 34MB is much better than 160MB, but it is still a long way from 9.9MB. (After a superficial look at the resulting files, I concluded that only the A-PDF compressed file lost quality. What happened with this file is that on some page, in some line containing a mathematical formula, the tops of the letters got chopped off.)
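The symbol-dictionary idea behind jbig2 can be illustrated with a toy sketch (my own illustration of the principle, not actual jbig2 code): identical glyph bitmaps are stored only once, and each occurrence on the page becomes a short reference into the dictionary.

```python
# Toy illustration: glyph bitmaps as byte strings; repeated glyphs
# are stored once in a dictionary and referenced by index.
page_glyphs = [b'\x1f\x11\x1f', b'\x0e\x11\x0e', b'\x1f\x11\x1f',
               b'\x1f\x11\x1f', b'\x0e\x11\x0e']

dictionary = []        # unique glyph templates
references = []        # per-occurrence index into the dictionary
for g in page_glyphs:
    if g not in dictionary:
        dictionary.append(g)
    references.append(dictionary.index(g))

stored = sum(len(g) for g in dictionary) + len(references)  # crude size proxy
raw = sum(len(g) for g in page_glyphs)
print(references, stored, raw)  # -> [0, 1, 0, 0, 1] 11 15
```

On real text pages, where the same few dozen character shapes repeat thousands of times, this is exactly why djvu and jbig2 beat generic image compression so decisively.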

Free, open source solutions

Becoming desperate, I continued hunting for better solutions. Searching around, I found iText, which is an open source, free Java library supporting all kinds of manipulations of Pdf files. I figured that it "uses" Jbig2, but it was not clear whether it uses it for compression or just knows how to handle the encoding. So here I go: I wrote a Java program opening a pdf file and then writing it out in "compressed" mode. Hmm, these few lines of code allowed me to create a file of size 26MB, smaller than anything I could get previously. Exciting! Unfortunately, opening the file revealed the `secret': quality was gone. The file looked seriously downsampled (i.e., the resolution was decreased). Not good.

Then I found pdfsizeopt on google code, which aims exactly at reducing the size of pdf files! The Holy Grail? Well, installing pdfsizeopt on my mac was far from easy (I use a Mac, which also runs Windows; quite handy, as some of the above software runs only under Windows..). However, I was finally able to run pdfsizeopt. Unfortunately, it seems to crash, without even looking at my pdf file (I hope the bug will be corrected soon so that I can report results using it). Along the way, I had to install jbig2enc. For this, I just had to install leptonica (version 1.62, not the latest one), which is really the part doing the image processing in this process. JBig2Enc expects a tif file and produces "pdf-ready" output (every page is put in a separate file), which can be concatenated into a single pdf file by a python script provided. Having jbig2enc on my system, I gave it a shot. I first used ddjvu to transform the input to a tif file (using the command line option "-quality=75", resulting in a file of size 1GB). Then I used the jbig2 encoder with the command line arguments "-p -s". The result is this:

jbig2enc: 3.8MB

Wow!! Opening the file revealed a dirty little secret: color images are gone, and the quality of some halftoned gray-scale images got degraded. However, line drawings were kept nicely and, in general, the quality was good (comparable to the original djvu file). Conversion to tif took 5 minutes, conversion from tif to jbig2 took ca. 4 minutes, altogether making the whole process take close to 10 minutes. (The other solutions were not any faster either. And the tests were run on a resourceful MacBook Pro.)

Conclusions

jbig2enc seems to work, but you will lose colors. If you are happy with this, jbig2enc is the solution, though the process should be streamlined a bit (a small script could do this). And, as noted above, these processes are not fast. I did not attempt to measure the speed precisely, but conversion takes a lot of time. Jbig2Enc is maybe on the faster end of the spectrum.

Future work
1. pdfsizeopt is a good idea. It should be made to work.
2. It would be nice to create a jbig2enc wrapper.
3. ddjvu is open source: maybe it can be rewritten to support jbig2 directly. The added benefit could be that one could also keep the OCR layer of the original djvu file, if one existed.
4. Along the way, I found a cool google code project, Tesseract, which is an open source OCR engine. How cool would it be if we had an OCR engine that helps the compression algorithm and eventually also puts an OCR layer on top of documents which lack text information (think of scanned documents, or documents converted from an old postscript file). Currently, I am using Nuance's Pdf Converter Professional (yes, I paid for it..), which I am generally very satisfied with, apart from its speed. However, this could be the subject of another post.
PS: I have tested the compression capabilities of Nuance's PDF Converter Professional and Abbyy's:
Nuance: 132MB
Abbyy: 129MB
Yes, I tried their advanced "MRC" compression, and in Nuance I explicitly selected JBIG2. No luck.

## Saturday, November 14, 2009

### Keynote vs. Powerpoint vs. Beamer

A few days ago I decided to give Keynote, Apple's presentation software, a try (it is part of iWork '09). Before that I used MS Powerpoint 2003, Impress from NeoOffice 3.0 (OpenOffice's native Mac version), and LaTeX with beamer. Here is a comparison of the ups and downs of these programs, mainly to remind myself when I reconsider my choice in half a year, and also to help people decide what to use for their presentations. Comments, suggestions, and criticism are absolutely welcome, as usual. Btw, while preparing this note I learned that go-oo.org has a native Mac Aqua version of OpenOffice. Maybe I will try it some day and update the post. It would also be good to include a recent version of Powerpoint in the comparison.

### Stability

• Keynote: Excellent
Based on only a few days of usage, though, so take this statement with a grain of salt..
• MS Powerpoint 2003: Excellent
• Impress: Poor
• Beamer: Excellent

### Creating visually appealing slides, graphics on slides

• Keynote: Excellent
Positioning rulers help a lot; the process is really smooth. Keynote forces you to use less text. Built-in templates are professional looking. Adding presentation graphics (tables, basic charts) is very easy; more complex technical drawings are better done with OmniGraffle. You can also easily animate the graphics and tables. Overall, very impressive.
• MS Powerpoint 2003: Good
Aligning to other objects is more cumbersome than in Keynote. The quality of fonts, color palettes, and templates is not as good as in Keynote.
• Impress: Good
Same as MS Powerpoint, maybe somewhat below (but the difference is not big).
• Beamer: Poor
The fonts and styles (templates) are great. However, creating slides with lively graphics is a nightmare (due to the lack of a GUI): you will end up with a few standard layouts and will in general not use graphics, let alone animated graphics (or you will spend days creating your slides). Also, departing from the built-in styles is difficult, and I am just bored with some of these styles that everyone seems to use.

### LaTeX (math) support

• Keynote: Poor
Supported through LatexIt (free), but overall a cumbersome process. Details below.
• MS Powerpoint 2003: Medium
Supported through TexPoint (commercial, USD30); the process is roughly the same as with LatexIt and Keynote, with slightly better integration.
• Impress: Medium
Supported through OOoLatex (free); same as MS Powerpoint + TexPoint, but the integration is slightly better.
• Beamer: Excellent
Beamer is built for this!
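As a tiny illustration of why beamer wins here, here is a minimal sketch (standard beamer overlay syntax; the frame content is made up) showing math revealed and highlighted step by step directly in the source:

```latex
\documentclass{beamer}
\begin{document}
\begin{frame}{Math with overlays}
  \begin{itemize}
    % <1-> means: visible from overlay step 1 onwards
    \item<1-> The binomial formula: $(a+b)^2 = a^2 + 2ab + b^2$
    % \alert<2> highlights this text on step 2 only
    \item<2-> \alert<2>{The cross term $2ab$} is highlighted on this step
  \end{itemize}
\end{frame}
\end{document}
```

This is exactly the kind of animated highlighting of formulae that the GUI tools make painful.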

### Animations

• Keynote: Near perfect
The magic slide transition helps a lot with continuity across slides. What does it do? If you have the same object on two consecutive slides, Keynote will create an animation, keeping the object on screen and flying it to its new position. It works with multiple objects, too. I have found this very helpful for presenting a multi-slide argument. In general, Keynote animations are slick and polished, and the flexibility is great. I do miss some features of Beamer, such as animated highlighting and in-place replacement of text (these can all be simulated with the existing tools, but only with difficulty).
• MS Powerpoint 2003: Basic
I miss Keynote's magic transitions. In general, Keynote is richer in animations. Again, some features of Beamer would be nice to have.
• Impress: Weak
Impress is inferior to MS Powerpoint in terms of its animation capabilities.
• Beamer: Good
If only someone added support for magic transitions between slides. Some other cool effects would also come in handy.

### Dual screen presentation support

The idea is to show your notes and the time left, in addition to the current and next slide, on your own screen, while showing only the current slide on the big screen.
• Keynote: Excellent
Keynote supports dual-screen presentations natively. If the notes screen ends up on the big screen, you can swap the two displays from the options menu.
• MS Powerpoint 2003: Not available
I have no experience with this feature in MS Powerpoint 2003. Maybe you can use an add-on or something, but the basic software does not support it. I am pretty sure newer versions of Powerpoint must support this.
• Impress: Excellent(?)
The "Sun Presenter Console" extension supposedly supports dual screen presentations just like Keynote, but I have never had the chance to test it. Hence, the question mark. Some posts on the internet indicate that the extension might leak memory.
• Beamer: Basic support
Use Splitshow for this purpose. However, as far as I know, you cannot show the current time or the time remaining on the notes screen.

### Interoperability

I want to put my presentations on the web so that people can look at them no matter what (major) operating system they use, without losing animations or any other features. Another desired feature is the ability to create a compact, printable version of the slides: that is, if you have animations spanning multiple slides, they should somehow be handled intelligently. There is a tradeoff here: the more animation-rich your slides are, the more bloated/complicated your printout will be.
• Keynote: OK
Proprietary file format. This is my biggest complaint: a Keynote presentation is a Keynote presentation. Apple likes to lock you in. Export to PDF and PPT works relatively well, but you will lose some features of the presentation, like the cool animations. Exporting to PDF without animations to create printable versions seems to work well.
• MS Powerpoint 2003: Good
Free powerpoint viewers exist that can play any PPT file. Export to PDF will again lose some features.
• Impress: Good
Same as powerpoint.
• Beamer: Excellent
Produces PDF output: the presentations can be viewed on any computer! Also, the source is LaTeX, and beamer is available on all systems. Add [handout] to the class options and beamer will create an animation-free version of your slides; this works in almost all cases.
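For reference, the handout trick is just a class option; a minimal sketch (the frame content is made up):

```latex
% The handout option collapses all overlay steps into single, printable slides
\documentclass[handout]{beamer}
\begin{document}
\begin{frame}{Example}
  \begin{itemize}
    \item<1-> First point
    \item<2-> Second point  % shown together with the first in handout mode
  \end{itemize}
\end{frame}
\end{document}
```

Removing the option from the same source file gives back the full animated version, so one source serves both purposes.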

### More about using formulae in Keynote (and why it sucks)

I used LatexIt, which produces a PDF that can be embedded into the presentation. The style is not matched automatically. The PDF contains the LaTeX source of the formula; to edit it, copy and paste the source back into LatexIt. When done editing, you need to drag and drop the formula back into Keynote. This sucks, since you need to delete the original that you just edited, reposition the new formula, and reapply animations if you had any. Horrible.

Another issue is that the source saved with the formula does not, by default, include the preamble; thus, using a command set specific to a presentation is difficult to achieve (you have to set this up manually). Another major headache is that you will not be able to use inline formulae (a piece of text is either in LaTeX or in Keynote; the fonts in general do not match and mix well, and alignment is a nightmare), nor will you be able to animate formulae easily (e.g., displaying a multiline formula line by line requires you to split the formula into multiple PDFs and use Keynote animations to show them one by one; this is problematic because aligning formulae by hand is time-consuming).