- Automatic vectorisation of aerial images
- Compression of palette (pseudocolor) images
- Lossless image compression
- Significant Speedup of Image Processing Based on n-Dimensional Differential Representation
- Solving the Next Best View problem for a camera in the robot hand

Image compression, aerial image processing, vectorization.

This project solves the conversion of raster images into a vector representation. Processing consists of these steps:

- Image
- Edge points
- Node points
- Vector representation

The algorithm is designed for the vectorisation of edges between adjacent domains. The input picture is an aerial image preclassified by a neural network: it consists of homogeneous domains with the same labelling (each label represented by a color).
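The first two steps above can be sketched as follows. This is a minimal illustration, not the thesis algorithm: it only locates edge points (pixels where two domains meet) and node points (where three or more domains meet in a 2x2 window) in a label image; the actual tracing into a vector representation is omitted.

```python
import numpy as np

def edge_and_node_points(labels):
    """Find boundary structure in a label image.

    labels : 2-D integer array of class labels (one per pixel).
    Returns (edge_points, node_points) as lists of (row, col).
    """
    h, w = labels.shape
    edge_points, node_points = [], []
    # Edge points: a pixel whose 4-neighbourhood contains another label.
    for r in range(h):
        for c in range(w):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] != labels[r, c]:
                    edge_points.append((r, c))
                    break
    # Node points: a 2x2 window covering three or more distinct labels.
    for r in range(h - 1):
        for c in range(w - 1):
            window = {labels[r, c], labels[r, c + 1],
                      labels[r + 1, c], labels[r + 1, c + 1]}
            if len(window) >= 3:
                node_points.append((r, c))
    return edge_points, node_points
```

Edge points are then chained between node points to form the polylines of the vector representation.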

A new method for the lossless compression of grey-level images is proposed. The image is treated as stacked bit planes. Its compressed version is represented as residuals of a non-linear local predictor spanning from the representative point in the current bit plane and a few neighbouring ones. Predictor configurations are grouped into couples that differ only in the bit of the representative point. The occurrence frequency of the predictor configurations is measured in the input image. The predictor adapts automatically to the image; it is able to estimate the influence of cells in the neighbourhood and thus copes even with complicated structure or fine texture.

Residuals between the original and the predicted image are those pixels that correspond to the less frequent predictor configurations. The efficiently coded residuals constitute the output image. To our knowledge, the proposed compression method outperforms competing methods.
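The prediction step can be illustrated on a single bit plane. This is a deliberately simplified sketch: the thesis predictor also spans neighbouring bit planes, and the configuration statistics would have to accompany the compressed stream; here a three-pixel causal context and a majority vote stand in for the full nonlinear predictor.

```python
import numpy as np
from collections import Counter

def predict_residuals(plane):
    """Context-based prediction on one bit plane (simplified sketch).

    Pass 1 counts, per causal context (west, north, north-west bits),
    how often the current bit is 1.  Pass 2 predicts the majority bit
    for each context and records residuals (actual XOR predicted).
    Residuals are sparse for regular images and are what gets coded.
    """
    h, w = plane.shape

    def context(r, c):
        west = plane[r, c - 1] if c > 0 else 0
        north = plane[r - 1, c] if r > 0 else 0
        nwest = plane[r - 1, c - 1] if r > 0 and c > 0 else 0
        return (west, north, nwest)

    # Pass 1: occurrence frequencies of the configurations.
    ones, total = Counter(), Counter()
    for r in range(h):
        for c in range(w):
            ctx = context(r, c)
            total[ctx] += 1
            ones[ctx] += int(plane[r, c])
    # Pass 2: majority prediction; residual = prediction error.
    residuals = np.zeros_like(plane)
    for r in range(h):
        for c in range(w):
            ctx = context(r, c)
            predicted = 1 if 2 * ones[ctx] > total[ctx] else 0
            residuals[r, c] = plane[r, c] ^ predicted
    return residuals
```

Even a highly structured plane such as a checkerboard yields almost no residuals, because every interior pixel falls into a context whose majority bit matches it.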

The next topic relates to the lossless compression of pseudocolor images (images with a palette). The proposed method is a preprocessing step preceding the actual compression: the indices in the palette are suboptimally permuted. For the actual image compression, our own nonlinear-predictor-based method is used~\cite{HlavacFojtikCAIP97}, but the proposed invisible palette modification benefits most other compression techniques too. Experiments with numerous images show that reordering the indices in the palette yields data savings of 10 to 50 % for typical images.

We suggest a preprocessing phase that (a) analyses the statistics of adjacency relations between index values, (b) performs the optimization, and (c) permutes the palette indices to obtain a smoother image. A smoother image makes lossless compression methods produce less output data. Permuting the indices optimally is an NP-complete combinatorial optimization problem. Instead of checking all possibilities, we propose a reasonable initial guess followed by a fast suboptimal hill-climbing optimization.

The last part of my master thesis proposes a set of novel methods for
**fast manipulation** of *n*-dimensional binary data. The main trick is
not to process the image as a raster of pixels but as its **differences**.
The following operations can be applied to a compressed image: AND, OR,
XOR, NOT, shift left, shift right. Their complexity does not depend on
the size of the image but only on the number of nonzero differences. The
speedup can be significant in many cases.

This work is based on the research of Prof. Schlesinger from Ukraine.
I extend this approach to *n* dimensions. The new differential image tool is
**strictly hierarchical**, which eases maintenance of the software:
new procedures for dimension *n* can be built from procedures for
lower dimensions only. A three-dimensional tool may be useful, e.g.,
in processing tomographic data.

We aimed to reconstruct the shape (or volume) of a body. The volume is captured by a Range-Finder placed in the robot hand; in this configuration the Range-Finder has 6 degrees of freedom. We attempt to minimize the cost of the measurement, which consists of the number of measurements required for the reconstruction.

To fulfil this task, we must plan the positions of the Range-Finder for capturing measurements. The placement algorithm uses a priori information about the model and, moreover, all data from all previous measurements. The stopping criterion, deciding when the algorithm finishes, is also part of the algorithm.
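A common baseline for this kind of planning, sketched below, is a greedy next-best-view loop. It is a hypothetical simplification of the thesis planner: `visible` abstracts away the sensor model and the a priori shape model, and the stopping criterion is reduced to a minimum-gain threshold.

```python
def plan_views(candidates, visible, threshold=1):
    """Greedy next-best-view sketch.

    candidates : list of viewpoint ids for the Range-Finder.
    visible    : dict mapping viewpoint -> set of surface cells it
                 would measure from that position.
    Repeatedly pick the view revealing the most still-unknown cells;
    stop when no view adds at least `threshold` new cells.  Minimizing
    the number of measurements approximates minimizing the cost.
    """
    known, plan = set(), []
    while True:
        best = max(candidates, key=lambda v: len(visible[v] - known))
        gain = len(visible[best] - known)
        if gain < threshold:     # stopping criterion: no useful view left
            break
        plan.append(best)
        known |= visible[best]
    return plan, known
```

Each iteration conditions on all previously measured data (the `known` set), mirroring the requirement that the planner use all previous measurements.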