The heliostat collector field is the front end of a large solar power tower plant. Any performance loss in the collector field propagates downstream through the plant's subsystems, reducing energy production and financial revenue. An underperforming collector field delivers insufficient solar flux to the receiver, so the receiver runs below capacity and does not produce the thermal energy required for thermal storage and for running the power block at optimum efficiency. An optimally operating collector field is therefore essential, especially for future Gen3+ plants. The performance of a deployed collector field can be degraded by mirror quality (surface and shape), mirror canting errors, tracking errors, and soiling. Any of these error sources can be present at installation and worsen over time; left unattended, they can drastically reduce overall plant performance. Concentrating solar power (CSP) plant operators need information about collector field performance so they can quickly apply corrections, if needed, and maintain optimum plant performance. Such fast response is especially critical for future Gen3+ plants, which require consistently high collector field performance. However, power tower operators have struggled to find or develop tools that assess, and subsequently fix, canting errors on in-field heliostats efficiently and accurately. Sandia National Laboratories' National Solar Thermal Test Facility (NSTTF) is developing an aerial imaging system to evaluate facet canting quality on in-situ and offline heliostats. The imaging system is mounted on an unmanned aerial system (UAS) and collects images of target structures in reflection; image processing on the collected images then yields estimates of the heliostat canting errors. The initial work develops a system definition that achieves the required measurement sensitivity, which is on the order of 0.25-0.5 mrad for canting errors. The goal of the system is to measure heliostat canting errors to <0.5 mrad accuracy and provide data on multiple heliostats within a day. This paper presents the development of the system, a sensitivity analysis, and initial measurement results on two NSTTF heliostats.
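To illustrate the geometry underlying such a measurement, the sketch below (hypothetical function and variable names, not the NSTTF processing pipeline) estimates a facet canting error from the directions of the expected and measured reflected rays, using the fact that tilting a mirror normal by an angle deflects the reflected ray by twice that angle.

```python
import numpy as np

def unit(v):
    """Return v normalized to unit length."""
    return v / np.linalg.norm(v)

def canting_error_mrad(reflected_expected, reflected_measured):
    """Estimate facet canting error in mrad from two reflected-ray directions.

    Tilting a mirror normal by an angle delta deflects the reflected ray
    by 2*delta, so the canting error is half the angle between the
    expected and measured reflected rays.
    """
    cos_a = np.clip(np.dot(unit(reflected_expected), unit(reflected_measured)),
                    -1.0, 1.0)
    return 0.5 * np.arccos(cos_a) * 1e3  # mirror-normal tilt, mrad

# Example: a reflected ray deflected 1 mrad from nominal implies a
# 0.5 mrad canting error -- at the edge of the stated accuracy goal.
expected = np.array([0.0, 0.0, 1.0])
measured = np.array([np.sin(1e-3), 0.0, np.cos(1e-3)])
print(f"canting error ~ {canting_error_mrad(expected, measured):.2f} mrad")
```

The factor of two is why reflected-target imaging is attractive here: the optical lever doubles the angular signal, easing the 0.25-0.5 mrad sensitivity requirement on the imaging system itself.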
Borrowing from nature, we implemented neural-inspired interception algorithms onboard a vehicle. To maximize success, work was conducted in parallel in a simulated environment and on physical hardware. The intercept vehicle used only optical imaging to detect and track the target. The successful outcome was a proof-of-concept demonstration of a neural-inspired algorithm autonomously guiding a vehicle to intercept a moving target. This work sought to establish the key parameters for the intercept algorithm (sensors and vehicle) and to expand the knowledge and capability of implementing neural-inspired algorithms in simulation and on hardware.
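The abstract does not specify the algorithm; one widely studied neural-inspired strategy is the constant-bearing interception observed in insects such as dragonflies, for which proportional navigation is a standard engineering analogue. The sketch below illustrates that strategy under assumed geometry and gains and should not be read as the project's implementation.

```python
import numpy as np

# Minimal 2D proportional-navigation sketch: the pursuer turns at a rate
# proportional to the line-of-sight rotation rate, a discrete analogue of
# the constant-bearing interception strategy observed in insects. All
# names, gains, and geometry here are illustrative, not the project's code.

DT, NAV_GAIN = 0.02, 4.0
pursuer, heading, speed = np.array([0.0, 0.0]), 0.0, 12.0
target, target_vel = np.array([60.0, 40.0]), np.array([-4.0, 0.0])

rel = target - pursuer
los_prev = np.arctan2(rel[1], rel[0])
for step in range(5000):
    rel = target - pursuer
    los = np.arctan2(rel[1], rel[0])                  # bearing to target
    d = los - los_prev
    los_rate = np.arctan2(np.sin(d), np.cos(d)) / DT  # wrapped bearing drift
    heading += NAV_GAIN * los_rate * DT               # turn to null the drift
    los_prev = los
    pursuer += speed * DT * np.array([np.cos(heading), np.sin(heading)])
    target += target_vel * DT
    if np.linalg.norm(target - pursuer) < 1.0:        # intercept radius
        print(f"intercept at t = {step * DT:.2f} s")
        break
```

Note that the controller needs only the target's bearing over time, which is exactly what a single optical imager provides; no range measurement is required.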
Sensors continue to decrease in size and power consumption. This report presents the results of a market survey, conducted in February 2020, of commercial off-the-shelf sensors with size, weight, and power suitable for carriage onboard a small unmanned aircraft system. For this report, Sandia National Laboratories considered sensors that can detect an object in three dimensions. The surveyed sensors fall into three categories: radio detection and ranging (radar) sensors, stereo camera sensors, and light detection and ranging (lidar) sensors.
Unmanned aircraft systems (UASs) have grown significantly within the private sector, with ease of acquisition and platform capabilities far outstripping what previously existed. Where operation of these platforms was once limited to skilled individuals, increased computational power, improved manufacturing techniques, and greater autonomy now allow inexperienced individuals to skillfully maneuver these devices. With this rise in consumer use of UAS comes an increased security concern regarding their use for malicious intent. Counter-UAS (CUAS) remains a challenging space because a small-cross-section UAS can move in all three dimensions, attain very high speeds, carry payloads of notable weight, and avoid standard delay techniques. We examine frequency analysis of pixel fluctuation over time to exploit the temporal frequency signature present in UAS imagery. This signature allows detection with fewer pixels on target [1]. The methodology also serves as an assessment method because UAS exhibit frequency signatures distinct from standard nuisance alarms such as birds. The temporal frequency analysis (TFA) method demonstrates a combined UAS detection and assessment capability. In this paper we discuss the signal processing and Fourier filter optimization methodologies that increase UAS contrast.
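As a minimal sketch of the per-pixel temporal analysis (illustrative only; the paper's optimized Fourier filters are not reproduced here), one can take the FFT of each pixel's intensity time series and integrate the energy in a band around the expected rotor blade-pass frequencies. Pixels modulated by the rotors then stand out against static background and slower nuisance motion; the band edges, frame rate, and function names below are assumptions.

```python
import numpy as np

def tfa_contrast_map(frames, fps, band=(80.0, 200.0)):
    """Temporal-frequency-analysis contrast map (illustrative sketch).

    frames : ndarray (T, H, W), grayscale video stack
    fps    : camera sample rate in Hz
    band   : pass band in Hz around assumed rotor blade-pass frequencies
             (band edges here are illustrative, not the paper's values)
    Returns an (H, W) map of band-limited temporal energy per pixel.
    """
    T = frames.shape[0]
    series = frames - frames.mean(axis=0)            # remove static background
    spectrum = np.abs(np.fft.rfft(series, axis=0)) ** 2
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum(axis=0)             # high where pixels flicker in-band

# Usage sketch: a 1 s clip at 480 fps with a synthetic ~120 Hz blade-pass
# modulation; a bird's slow wingbeat would fall below the pass band.
rng = np.random.default_rng(0)
clip = rng.normal(size=(480, 64, 64))
t = np.arange(480) / 480.0
clip[:, 30:34, 30:34] += 5.0 * np.sin(2 * np.pi * 120.0 * t)[:, None, None]
contrast = tfa_contrast_map(clip, fps=480.0)
print("peak at:", np.unravel_index(contrast.argmax(), contrast.shape))
```

Because the blade-pass signature is concentrated in a narrow band while birds and foliage modulate at much lower frequencies, the same map supports both detection and assessment.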
Using internal investment funds within Sandia National Laboratories' (SNL) Division 6000, JUBA was a collaborative exercise between SNL Orgs. 6533 and 6913 (later 8863) to demonstrate simultaneous flights of tethered balloons and UAS on the North Slope of Alaska. JUBA UAS and tethered balloon flights were conducted within the Restricted Airspace associated with the ARM AMF3 site at Oliktok Point, Alaska; the Restricted Airspace occupies a 2-nautical-mile radius around Oliktok Point. JUBA was conducted at the Sandia Arctic Site, approximately 2 km east-southeast of the AMF3. JUBA activities occurred from 08/08/17 to 08/10/17. Atmospheric measurements from tethered balloons can be made over long durations but offer limited spatial coverage; measurements from UAS could offer increased spatial coverage.
This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivering special payloads to locations that require human-like locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome, and it could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle is emulating the stability and balance control that comes naturally to humans but must be engineered into a bipedal robot. A tracked robot is bulky and limited, but its wide wheel base assures passive stability; human bipedal motion is so common that it is taken for granted, yet it requires active balance and stability control whose analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.
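To make the balance problem concrete, bipedal standing is often reduced to a linear inverted pendulum, which is open-loop unstable and therefore demands active feedback, in contrast to the passive stability of a wide wheel base. The following is a minimal sketch under that textbook model; the dynamics, gains, and numbers are illustrative and are not taken from the report.

```python
import numpy as np

# Linear inverted pendulum sketch of bipedal balance: theta'' = (g/l)*theta + u.
# The upright state is open-loop unstable (eigenvalues +/- sqrt(g/l)), so a
# legged robot must close the loop; PD state feedback stabilizes it.
# Model, gains, and initial tilt are illustrative, not from the report.

g, l, dt = 9.81, 1.0, 0.001
kp, kd = 40.0, 10.0                     # PD gains on tilt and tilt rate
theta, omega = 0.05, 0.0                # initial 50 mrad lean

for step in range(3000):                # 3 s of simulated balancing
    u = -kp * theta - kd * omega        # ankle-torque-like control input
    alpha = (g / l) * theta + u         # linearized pendulum dynamics
    omega += alpha * dt                 # forward-Euler integration
    theta += omega * dt

print(f"tilt after 3 s: {theta:.2e} rad")  # decays toward upright
```

Removing the feedback term (u = 0) makes the same loop diverge exponentially, which is the instability a biped controller must continuously overcome.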
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Advancements in Sensing and Perception using Structured Lighting Techniques". There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Despite nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are still necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad-daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky and heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight; eye safety is a primary concern for currently available laser-based sensors. Passive stereo-imaging sensors are available for 3D sensing but suffer from several limitations: they are computationally intensive, require a lit environment (natural or man-made light source), and fail for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis using structured lighting. We have a diverse customer base for indoor mapping applications; this research extends our current technology's life cycle and opens a new market in outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and reconnaissance, part inspection, geometric modeling, laser-based 3D volumetric imaging, simultaneous localization and mapping (SLAM), aiding first responders, and supporting soldiers with helmet-mounted LADAR for 3D mapping in urban-environment scenarios. The technology developed in this LDRD overcomes the limitations of current laser-based 3D sensors and contributes to the realization of intelligent machine systems that reduce manpower needs.
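The depth-recovery principle behind such a structured lighting sensor can be illustrated with a minimal triangulation sketch: a projected stripe defines a known plane in space, and intersecting that plane with the viewing ray through each camera pixel yields a 3D point. The geometry, names, and calibration numbers below are illustrative assumptions, not the sensor's actual design.

```python
import numpy as np

def triangulate_stripe_point(pixel_ray, plane_point, plane_normal):
    """Intersect a camera pixel's viewing ray with a projected light plane.

    pixel_ray    : unit direction of the ray through the camera center
    plane_point  : any point on the projected stripe's plane
    plane_normal : normal of that plane (from projector calibration)
    Returns the 3D point where the ray meets the plane (camera at origin).
    """
    t = np.dot(plane_point, plane_normal) / np.dot(pixel_ray, plane_normal)
    return t * pixel_ray

# Usage sketch with made-up calibration: a light plane offset 0.2 m along
# the baseline and tilted toward the scene; one pixel ray looking forward.
ray = np.array([0.05, 0.0, 1.0])
ray /= np.linalg.norm(ray)
plane_pt = np.array([0.2, 0.0, 0.0])                  # projector baseline
plane_n = np.array([np.cos(0.1), 0.0, np.sin(0.1)])   # tilted plane normal
point = triangulate_stripe_point(ray, plane_pt, plane_n)
print(f"recovered depth z = {point[2]:.3f} m")
```

Because the projector supplies its own texture, this active scheme sidesteps the texture-dependence of passive stereo, while the per-pixel computation stays far cheaper than dense stereo matching.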