The \ac{ALMA} telescope is composed of 66 high-precision antennas, each equipped with 8 high-bandwidth digitizers sampling at 4 Gsps. Verifying that these digitizers are working correctly before starting a round of observations is a critical task. Failing to ensure that the digitizers' first two statistical moments (mean and variance) are within their expected values brings two major problems: a \textit{large DC component}, which translates into a loss of energy for the spectral channels of interest, and a \textit{loss of efficiency} due to a sub-optimal DG adjustment, which prevents achieving the maximal \ac{SNR}. Since observation time is a valuable resource, a tool is needed that can provide a quick and reliable answer regarding the digitizer status. Currently, the digitizer output statistics are measured using comparators and counters. However, this method introduces uncertainties due to the low number of integrations, and sweeping through all the possible states of every available digitizer keeps the antennas occupied for a considerable amount of time. To avoid these problems, a new method based on correlator resources is presented.
The research starts by providing insight into the theory of operation, demonstrating how the mean and the variance can be extracted by examining the auto-correlation lags of each antenna. The study is carried out for two- and three-bit samples and is based on four simplifications: first, the distribution is assumed to be symmetrical; second, the mean is taken as zero; third, the set of possible multiplication products in the auto-correlation is reduced, which simplifies the estimation of these outputs; and fourth, the samples are characterized as a stationary stochastic process whose auto-correlation is zero for any time shift different from zero.
Using the lag0 value of the auto-correlation (the correlation function at zero time shift), the variance, and therefore the standard deviation, can be determined. The theoretical standard deviation can be estimated from the expected ideal lag0 value, from which the distribution error follows. The mean, in turn, is retrieved as the square root of the average of the second half of the collected lags (so as to avoid adding the side lobes of the auto-correlation function), as summarized below.
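Under the simplifications above, these relations can be sketched as follows; the notation is ours (with $x[n]$ the digitizer samples, $R(\tau)$ the auto-correlation at time shift $\tau$, and $N$ the number of collected lags) and is meant as an illustration rather than the exact formulation of the full derivation:
\begin{align*}
R(0)    &= \mathrm{E}\{x[n]^2\} = \sigma^2 + \mu^2, \\
R(\tau) &= \mathrm{E}\{x[n]\,x[n+\tau]\} = \mu^2, \qquad \tau \neq 0,
\end{align*}
so that the estimates become
\begin{align*}
\hat{\mu} = \sqrt{\frac{2}{N}\sum_{\tau=N/2}^{N-1} R(\tau)},
\qquad
\hat{\sigma}^2 = R(0) - \hat{\mu}^2 .
\end{align*}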
These findings were implemented in a \ac{DPI}, where both the lag0 value and the average of 64 lags are calculated and transferred. The \ac{DPI} firmware C code was updated to include this lag collection and its transmission by means of three new \ac{CAN} protocols. A Python script running on the client side communicates with the \ac{DPI}, captures the lags, computes the first two statistical moments, and determines the digitizer status from the resulting offset and distribution errors; a sketch of this client-side computation is given below. In the testing procedure, the offset (the mean, or first statistical moment) obtained through the current \ac{TFB} sample-retrieval method is compared against the offset derived from the lag averages taken directly from the auto-correlation output. The collected data show a proper fit between both methods, with a maximum error of 4.5\%, which is within acceptable margins. A similar exercise was prepared for the second statistical moment, where the curves from both approaches differ by only 1.51\%.
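The following minimal sketch illustrates how the moments and the associated errors could be derived from a captured lag set; the function names and the parameterization of the ideal values are ours, and the \ac{CAN} capture itself is omitted:
\begin{verbatim}
import numpy as np

def moments_from_lags(lags):
    """Estimate mean and standard deviation from auto-correlation lags.

    `lags` holds the N auto-correlation values (N = 64 in the DPI),
    with lags[0] being the zero-shift term (lag0).
    """
    lags = np.asarray(lags, dtype=float)
    n = lags.size
    # Mean: square root of the average of the second half of the lags,
    # which avoids the side lobes of the auto-correlation function.
    mean = np.sqrt(np.mean(lags[n // 2:]))
    # Standard deviation: from lag0 minus the squared mean,
    # since R(0) = sigma^2 + mu^2.
    sigma = np.sqrt(lags[0] - mean**2)
    return mean, sigma

def digitizer_errors(lags, ideal_mean, ideal_sigma):
    """Relative offset and distribution errors against the ideal values,
    which come from the expected statistics of a well-adjusted sampler."""
    mean, sigma = moments_from_lags(lags)
    offset_error = abs(mean - ideal_mean) / ideal_mean
    distribution_error = abs(sigma - ideal_sigma) / ideal_sigma
    return offset_error, distribution_error
\end{verbatim}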
The results confirm the concordance between the trends provided by the old method and the new one developed in this work. However, it is crucial to highlight that the correct way to use this method is to first check whether the mean is within an acceptable range (less than 5\% error with respect to the ideal value); if the first statistical moment is outside a tolerable value, the value of the second statistical moment is meaningless, as reflected in the sketch below. This method will allow the performance of the digitizers to be verified promptly, saving time in detecting arising issues while ensuring appropriate data quality.
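A minimal sketch of this gating logic follows, assuming the relative errors have already been computed (for example with the hypothetical digitizer_errors above); the 5\% tolerance for the mean follows the text, while applying the same threshold to the distribution error is an assumption:
\begin{verbatim}
def digitizer_status(offset_error, distribution_error, tolerance=0.05):
    """Check the first moment before trusting the second one."""
    if offset_error > tolerance:
        # An out-of-tolerance mean makes the second moment meaningless.
        return "FAIL: offset out of range"
    if distribution_error > tolerance:
        return "FAIL: distribution out of range"
    return "OK"
\end{verbatim}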