Theory Behind The Science

Free space optical communication systems need to compensate for channel impairments such as atmospheric distortion and the frequency response of hardware components (e.g. transmitter LEDs). To address these problems in an economical and lightweight way, we devised an approach based on deep learning. Building on the work of O’Shea and Hoydis, we model the downlink as an autoencoder: an unsupervised learning architecture that maps high-dimensional data to a lower-dimensional representation, acting much like a compression algorithm. This allows us to make full use of the available channel bandwidth.

Communication systems are traditionally built from mathematically tractable models. However, in the interest of lowering the cost of development and deployment in production environments, it is more economical to eschew the traditional multi-stage communication chain in favor of a single end-to-end deep learning system optimized for the specific hardware and channel, reducing development time and cost. Because certain classes of deep learning systems (recurrent neural networks in particular) are known to be Turing complete and to act as universal function approximators, the various stages of a classical communication architecture can be collapsed into a single block that the neural network models in its entirety.

The cost of deep learning systems has been falling rapidly, and as the efficiency of GPU systems improves, it is becoming possible to embed such a system in a CubeSat. There are ongoing research efforts into specialized ASICs (Application-Specific Integrated Circuits) optimized for neural networks, including Google’s Tensor Processing Units. In future iterations, such chips could be integrated into CubeSats in place of the off-the-shelf embedded graphics processor (in this case the Raspberry Pi’s on-board GPU) that will be flown on Calypso, allowing higher power efficiency and better optimization.

The autoencoder system will be implemented with a one-hot input vector of the maximum size that Calypso’s embedded Raspberry Pi module can run efficiently in flight conditions (to be determined empirically). Assuming an input vector of size n with n = 16, each packet fed into the encoder carries log2(n) bits of data, in this case 4 bits. The goal of the neural network is to minimize the difference between the theoretically achievable SNR and the actual SNR. The theoretical transmission we model against uses a QAM-16 transmission system with Hamming(7,4) error correction. O’Shea and Hoydis have already demonstrated that such machine-learning-based signal processing systems can match, and in some cases exceed, the performance of traditional encoding and modulation systems that operate in discrete stages.
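
As a concrete illustration, the sketch below sets up such an autoencoder in Keras with a 16-symbol one-hot input and a 7-value encoder output, mirroring the Hamming(7,4) baseline and the encoder width discussed below. The layer widths, training SNR, and the AWGN stand-in channel are illustrative assumptions rather than the flight configuration, which will be sized empirically for the Raspberry Pi.

```python
# Illustrative (7,4) autoencoder following O'Shea & Hoydis; sizes and SNR are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

M = 16                      # alphabet size (one-hot vector length), log2(16) = 4 bits/message
n_channel = 7               # encoder output width, matching the Hamming(7,4) baseline
EbN0_dB = 7.0               # assumed training SNR
noise_std = np.sqrt(1.0 / (2 * (4 / n_channel) * 10 ** (EbN0_dB / 10)))

# Encoder: one-hot message -> power-normalized channel symbols
inp = layers.Input(shape=(M,))
x = layers.Dense(M, activation="relu")(inp)
x = layers.Dense(n_channel, activation="linear")(x)
encoded = layers.Lambda(lambda v: np.sqrt(n_channel) * tf.math.l2_normalize(v, axis=1))(x)

# Channel: AWGN stand-in for the optical downlink (replaced by measured data in flight)
channel = layers.GaussianNoise(noise_std)(encoded)

# Decoder: recover the transmitted message index
y = layers.Dense(M, activation="relu")(channel)
out = layers.Dense(M, activation="softmax")(y)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")
```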

On this premise, we have decided to train the neural network on Calypso while it is in flight, rather than on the ground with theoretical atmospheric loss models, so as to achieve the highest efficiency in practice. The training data will be sent up via the main RF uplink and used to train the optical communication system. In the system proposed by O’Shea and Hoydis, the final layer of the encoder has 7 neurons, matching the chosen error-correcting code. The output of the encoder is passed through an I-Q modulator via the Altera FPGA before being pushed into the DAC. Similarly, the output of the downlink ADC is passed through an I-Q demodulator before being fed into the decoder system. Using machine learning as a replacement for a traditional signal processing stack offers great flexibility, as the communication system can now be updated in flight. Its self-optimizing nature also means it will seek the best achievable performance with minimal human intervention.
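
Continuing the sketch above, training requires only the message indices themselves as labels, so the training set uplinked over the RF link (or generated on board) reduces to a stream of one-hot vectors; once trained, the two halves of the network can be split out as the flight encoder and decoder. The batch size and epoch count below are placeholders.

```python
# Training continuation of the sketch above (reuses autoencoder, M, inp, encoded, n_channel).
train_idx = np.random.randint(0, M, size=50000)
train_onehot = np.eye(M)[train_idx]

autoencoder.fit(train_onehot, train_onehot, epochs=20, batch_size=256, verbose=0)

# After training, the encoder half produces the 7-value symbol pushed to the DAC,
# and the decoder half maps digitized receiver samples back to a message index.
encoder = Model(inp, encoded)
decoder_in = layers.Input(shape=(n_channel,))
decoder_out = autoencoder.layers[-1](autoencoder.layers[-2](decoder_in))
decoder = Model(decoder_in, decoder_out)
```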

At the same time, we recognize that VLC offers an exciting opportunity for atmospheric correction technology. When multiple transmitters are used on a single link, a differential phase delay can be observed, as recognized in the work of Yu Si Yuan, which we summarize as follows.

It has been suggested that, provided the transmitters are separated widely enough (on the order of several kilometers), the interference between the two beams can effectively correct for atmospheric scintillation. On most spacecraft, however, the differential phase delay is too small to be useful for correcting atmospheric fluctuations, because physical constraints prevent the two transmitters from being positioned a substantial distance apart. The use of visible light communication techniques instead opens up the possibility of multiple synchronized receivers, thanks to the wide beam coverage. The Calypso payload will evaluate this technique using two identical rubidium time-synchronized receivers placed 20 km apart to record the differential delay between the two received signals. The two records are then digitally interfered and corrected to recover the transmission.
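
A minimal sketch of the digital combination step is shown below, assuming both receivers are sampled on a common rubidium-disciplined clock. The cross-correlation delay estimate and simple averaging are illustrative assumptions, not the final correction algorithm.

```python
# Hedged sketch: estimate the differential delay between two synchronized receivers
# and coherently combine them. Function and variable names are illustrative.
import numpy as np

def combine_receivers(rx_a: np.ndarray, rx_b: np.ndarray) -> np.ndarray:
    """Align receiver B onto receiver A's timebase and average the two records."""
    # Cross-correlate to find the integer-sample differential delay
    corr = np.correlate(rx_a - rx_a.mean(), rx_b - rx_b.mean(), mode="full")
    lag = corr.argmax() - (len(rx_b) - 1)
    # Shift receiver B by the estimated delay and sum coherently
    rx_b_aligned = np.roll(rx_b, lag)
    return 0.5 * (rx_a + rx_b_aligned)
```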

Each ground station consists of a 30 cm aperture Ritchey-Chrétien telescope mounted on a commercial German equatorial mount. Tracking is achieved via a coaxial 5 cm refractor and a control computer. Both stations will be equipped with a YAG-pumped uplink laser modulated by a DAC/FPGA setup, which also allows the interference-based correction technique to be used on the PV-cell uplink. We expect the cost of each ground station in its final configuration to be under $9,000, a figure crucial for duplicability and widespread adoption, enabling the construction of a ground station network. At the same time, the same hardware could be shared among multiple bands, together with laser downlink users, to spread deployment costs.

Reverse biasing a photodiode is a common technique for improving its bandwidth: the bias widens the depletion region, which lowers the junction capacitance and increases the carrier drift velocity. The use of such biasing on solar cells has been investigated in the VLC industry, where it has been demonstrated to improve the -3 dB bandwidth of a PV cell by up to 60% using a 30 V bias supplied by a lightweight upconverter. Moreover, the energy cost was found to be minimal, since the bias also increases the PV cell’s conversion efficiency and thereby recovers much of the energy spent.
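
For intuition, a back-of-the-envelope calculation (with assumed, not measured, values) shows how a reduction in junction capacitance under reverse bias translates into a bandwidth gain of the quoted magnitude.

```python
# Illustrative only: assumed load resistance and junction capacitances, not Calypso measurements.
import math

R_load = 50.0            # ohms, assumed load/transimpedance resistance
C_unbiased = 100e-9      # farads, assumed junction capacitance at 0 V
C_biased = 62.5e-9       # farads, assumed capacitance under a 30 V reverse bias

f3dB = lambda C: 1.0 / (2 * math.pi * R_load * C)   # RC-limited -3 dB bandwidth
print(f3dB(C_biased) / f3dB(C_unbiased))            # -> 1.6, i.e. a ~60% bandwidth gain
```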

The use of the uplink as the primary communication system requires that the receiver is powered on at all times. Thus we employed a low power wake-up scheme utilizing an envelope detector and a 2MS/s COTS ADC which polls all faces for a period of 100mS each at a reduced sampling rate. This scanning process can be implemented to consume minimal standby power as demonstrated by similar implementations in commercial wake-up receivers. The ground station broadcasts a link start signal for 5 seconds. When a clock signal is recovered, the ADC selects the face with the best SNR and processes telemetry data. One of the primary goals of this payload is to investigate the feasibility of such a system as an emergency communication and reset channel. The implementation of the receiver circuitry in the EPS allows it to power cycle the spacecraft via dedicated control. At the same time, the received telemetry can be delivered to the OBC to overrides radio link data.
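
A sketch of the scanning logic follows, assuming six receiver faces. The hardware-access functions (sample_face, envelope_energy, estimate_snr) are hypothetical hooks rather than existing APIs, and the threshold and dwell time are placeholders.

```python
# Hedged sketch of the face-scanning wake-up loop described above.
import time

FACES = range(6)                 # six CubeSat faces with PV-cell receivers (assumed)
DWELL_S = 0.100                  # 100 ms dwell per face at reduced sampling rate
WAKE_THRESHOLD = 0.5             # placeholder envelope-energy threshold

def scan_for_link_start(sample_face, envelope_energy, estimate_snr):
    """Poll each face in turn; once the link-start broadcast is detected,
    return the index of the face with the best SNR."""
    while True:
        detections = {}
        for face in FACES:
            samples = sample_face(face, duration_s=DWELL_S, reduced_rate=True)
            if envelope_energy(samples) > WAKE_THRESHOLD:
                detections[face] = estimate_snr(samples)
        if detections:
            # The 5 s link-start broadcast spans several full scans, so a detection
            # here is reliable; hand the best face to the telemetry decoder.
            return max(detections, key=detections.get)
        time.sleep(0)            # yield; standby loop keeps power draw minimal
```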