Data on board the future PLANCK Low Frequency Instrument (LFI), designed to measure the Cosmic Microwave Background (CMB) anisotropies, consist of $N$ differential temperature measurements, spanning a range of values we shall call $R$. Preliminary studies and telemetry allocation indicate the need to compress these data by a ratio of $c_r \gtrsim 10$. Here we present a study of entropy for (correlated multi-Gaussian discrete) noise, showing how the optimal compression $c_{r,\mathrm{opt}}$, for a linearly discretized data set with $N_\mathrm{bits} = \log_2 N_\mathrm{max}$ bits, is given by $c_{r,\mathrm{opt}} \simeq N_\mathrm{bits}/\log_2\!\left(\sqrt{2\pi e}\,\sigma_e/\Delta\right)$, where $\sigma_e \equiv (\det C)^{1/2N}$ is an effective noise rms given by the covariance matrix $C$, and $\Delta = R/N_\mathrm{max}$ is the digital resolution. This $\Delta$ only needs to be as small as the instrumental white noise RMS: $\Delta \simeq \sigma_T \simeq 2\,\mathrm{mK}$ (the nominal $\mu$K pixel sensitivity will only be achieved after averaging). Within the currently proposed $N_\mathrm{bits} = 16$ representation, a linear analogue-to-digital converter (ADC) will allow the digital storage of a large dynamic range of differential temperatures, $R = N_\mathrm{max}\Delta$, accounting for possible instrument drifts and instabilities (which could be reduced by proper on-board calibration). A well-calibrated signal will be dominated by thermal (white) noise in the instrument, $\sigma_e \simeq \sigma_T$, which could yield large compression rates, $c_{r,\mathrm{opt}} \simeq 8$; this is the maximum lossless compression possible. In practice, point sources and $1/f$ noise will produce $\sigma_e > \sigma_T$ and $c_{r,\mathrm{opt}} < 8$. This strategy seems safer than non-linear ADC or data reduction schemes (which could also be used at some stage).
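As a quick numerical sketch of the white-noise limit (assuming, as stated above, $\sigma_e \simeq \sigma_T \simeq \Delta$ and the proposed $N_\mathrm{bits} = 16$), the formula gives
\[
c_{r,\mathrm{opt}} \simeq \frac{N_\mathrm{bits}}{\log_2\!\left(\sqrt{2\pi e}\,\sigma_e/\Delta\right)}
= \frac{16}{\log_2\sqrt{2\pi e}} \approx \frac{16}{2.05} \approx 7.8,
\]
consistent with the $c_{r,\mathrm{opt}} \simeq 8$ lossless limit quoted above.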