
A notion of function compression

The following compressibility concept was introduced by Harnik and Naor in their recent paper: given a function $f$ over some domain, a compression algorithm for $f$ should efficiently compress an input $x$ in a way that preserves the information needed to compute $f(x)$. Note that if $f$ is available (and efficiently computable), then compression is trivial: $y=f(x)$ itself serves as the compact representation of $x$. In fact, the concept was originally studied in the framework of NP decision problems, where, unless $P=NP$, $f$ is not efficiently computable, and hence the trivial solution is not available.

I am wondering whether this compressibility notion could be used in learning theory or function approximation. Consider, e.g., classification problems, so that the output of $f$ is in $\{0,1\}$. In order to prevent the trivial solution we may require that $x$ be compressed to some $y$ such that $f(g(y))=f(x)$ for some fixed (efficiently computable) function $g$. We do not require that $g(y)=x$, though if $g$ satisfies this then we certainly get a solution: if a compact representation of $\{ x \mid f(x)=1\}$ is available, then $f$ can be compressed well. Of course, we might allow for some error and study approximation rates. The notion applies to real-valued or more general functions, too. What is the class of functions that can be efficiently compressed? How is this class related to classical smoothness spaces (in the case of real-valued functions)? One obvious thought is that the smoother the target function is, the better the obvious discretization approach will do. It would be interesting to find out whether this new approach could lead to new regularity descriptions by varying the restrictions on the compression.
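To make the interface concrete, here is a minimal sketch under the definitions above. The particular $f$, compress and $g$ are toy illustrations made up for this post, not anything from the Harnik-Naor paper:

```python
# Toy illustration of the compression interface for a binary target
# f: X -> {0, 1}.  All names here are hypothetical.

def f(x):
    """Target function; a toy threshold rule on 2D points."""
    return 1 if x[0] + x[1] > 0 else 0

def compress(x):
    """Hypothetical compressor: keep only what is needed to compute f.
    For the toy f above the quantity x[0] + x[1] suffices, so the 2D
    input is compressed to a single number y."""
    return x[0] + x[1]

def g(y):
    """Fixed, efficiently computable map back into the domain.  It need
    not recover x; it only has to produce some input on which f gives
    the same label."""
    return (y, 0.0)

# The compression is considered correct if f(g(compress(x))) == f(x).
for x in [(1.0, 2.0), (-0.5, 0.2), (3.0, -4.0)]:
    assert f(g(compress(x))) == f(x)
```

The point of the sketch is only the success criterion $f(g(\text{compress}(x)))=f(x)$; any compressor meeting it, however it is built, would do.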

Another related thought is to compress the training samples in a learning problem so that the learned solution stays (roughly) the same.
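A hedged sketch of this second idea, using a plain random subsample as a naive stand-in for a training-set compressor (nothing here is prescribed above; it only illustrates the success criterion of "roughly the same solution"):

```python
# Compress the training set and check that the learned solution is
# roughly unchanged.  The data, the least-squares "learner" and the
# subsampling compressor are all hypothetical choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)

def fit(X, y):
    """Least-squares fit to the 0/1 labels; any fixed learner would do."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

w_full = fit(X, y)

idx = rng.choice(n, size=n // 10, replace=False)  # "compressed" training set
w_small = fit(X[idx], y[idx])

# The compression is good if the two solutions label (almost) all points alike.
agreement = np.mean((X @ w_full > 0.5) == (X @ w_small > 0.5))
print(f"prediction agreement: {agreement:.3f}")
```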

Comments

Unknown said…
I think this question has been answered in some ways in algorithmic complexity/probability and, as a more concrete instantiation, in the minimum description length (MDL) literature.

There is an additional element here, since in MDL you typically just want to compress $x \in X$ (or a sequence of $x_i$). However, there might be additional gains in our case, since we only care about preserving those aspects of $x$ required to approximate $f$.

Then I suppose that the question becomes in what sense you want to approximate $f$ - if you want something that works uniformly over $X$, then why should the original input matter at all?
