
begin structuring website
statespacedev committed Apr 1, 2024
1 parent 0dc681c commit 2d73b75
Showing 2 changed files with 29 additions and 4 deletions.
31 changes: 28 additions & 3 deletions docs/notes.md
the 'lost in space' problem - [photos](https://photos.app.goo.gl/ifuTJUNsaRJK21E79){:target="_blank" rel="noopener"}
------------------------------

a vehicle in deep space accidentally tumbles out of control, losing power and shutting down. after a while, emergency systems regain power and stop the tumbling. the vehicle's star tracker snaps an image. what an onboard computer needs to do is match the stars in the image with entries in a star catalog. it's then straightforward to determine the vehicle's orientation in space.

it turns out that the essence of the problem isn't two dimensional images. instead, the focus is on three dimensional unit vectors. true, for unit vectors one of the three coordinates is redundant - given the other two, the unit constraint determines the third up to sign. but each star is essentially 'purely a direction' in the star tracker's reference frame. purely a pointing vector. the image encodes a set of unit vectors pointing out of an origin in various directions. with appropriate knowledge of the star tracker's characteristics, a set of three dimensional unit vectors is immediately extracted from the image.
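that extraction step can be sketched in a few lines of numpy - this assumes a simple pinhole camera model, and the focal length in pixels is a hypothetical calibration value, not an actual star tracker's:

```python
import numpy as np

def pixels_to_unit_vectors(xy, focal_len_px):
    """convert star centroids in pixel coordinates (relative to the
    boresight) into unit vectors in the tracker frame - a minimal
    pinhole model sketch, focal_len_px is an assumed calibration."""
    xy = np.asarray(xy, dtype=float)
    # each centroid (x, y) becomes a ray (x, y, f) from the origin
    v = np.column_stack([xy[:, 0], xy[:, 1], np.full(len(xy), focal_len_px)])
    # normalize each row to unit length
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# two star centroids, each a hundred pixels off boresight
vecs = pixels_to_unit_vectors([[100.0, 0.0], [0.0, 100.0]], focal_len_px=2000.0)
```

real trackers need lens distortion and alignment calibrations on top of this, but the output is the same - purely pointing vectors.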

this parallels the nature of star catalogs. star catalogs are collections of three dimensional unit vectors in an agreed on celestial reference frame. these days, the celestial frame is tied to the pointing directions to quasars, astronomical objects distant enough to be effectively 'fixed in space'. fixed landmarks as it were. star catalogs also provide names, brightnesses, etc, but here the core information is 'purely pointing direction'.

so the lost in space problem is about matching a set of unit vectors in a sensor reference frame with a set of unit vectors in a star catalog reference frame. the unit vectors can point towards any patch of directions 'on the sky' - it's really about the whole sphere of the night sky, and that's one of the core elements here.

for matching the two sets of unit vectors, the key characteristics are the angular separations. angular separations of unit vectors immediately connect with a core operation in numerical computing, the dot product - summing the products of unit vector components. and these angular separations concern the mutual relationships between pairs of unit vectors, so the focus becomes pairs of star vectors, and then triplets of pairs. it's a bit of linguistic abuse, but triplets of pairs can be thought of as 'triangles', and pairs can be thought of as 'sides'.
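as a minimal sketch of those two ideas - the angular separation of a pair is just the arccos of the dot product, and a triplet of pairs gives the three 'sides' of a 'triangle' (the function name here is assumed, not from the starid code):

```python
import numpy as np

def angular_separation(u, v):
    # angle between two unit vectors via the dot product - the clip
    # guards against tiny floating point excursions outside [-1, 1]
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# three unit vectors along the coordinate axes - every pair is
# separated by ninety degrees, so every 'side' is pi/2
a, b, c = np.eye(3)
sides = [angular_separation(a, b),
         angular_separation(b, c),
         angular_separation(c, a)]
```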

early history
------------------------------

key elements of lost in space are an automated star tracker mounted on a vehicle, images of the night sky, star catalogs for interpreting the images, and computing to bring them all together. the earliest appearance of all these elements seems to be with project febe in 1948, immediately after ww2. it seems likely this was both the earliest date that the sensing and computing were available, and that there was sufficient motivation to make the attempt. the motivations concern inertial guidance and navigation and immediately connect with a range of fascinating topics. to reduce distractions, the focus will be kept on the lost in space problem here, as close to star trackers and the night sky as possible. inertial sensors generally mean foreknowledge of what part of the sky is in view, which is the opposite of pure lost in space.

early systems, such as project febe, generally locked on to and tracked single stars. for true lost in space, an imaging star tracker with a ccd style pixelized sensor is needed, and those became available in the seventies and eighties. a related situation arose earlier though, namely, interpreting photographic plates from astronomical telescopes. here's an example that can be referred to as the 'asteroid problem'.

reflected sunlight from asteroids is dim because of their small size and distance, so large telescopes are needed to gather enough light. the resulting images cover a relatively small patch of sky and contain lots of faint stars - definitely a smaller and fainter patch of sky than is normal for the naked eye. the question can arise - what's the star catalog entry for one of the faint stars in an asteroid image? for the lost in space problem, the field of view and star brightnesses are relatively large, limiting the number of potential stars to thousands. for the asteroid problem, the small field and faint brightnesses increase that number to tens of thousands or more. but at the same time, the pointing of the astronomical telescope is known, and the effective star catalog can be reduced to dim stars near the telescope's pointing vector.
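reducing the catalog to stars near a known pointing vector is once again a dot product operation - a sketch, with the function and variable names assumed for illustration:

```python
import numpy as np

def stars_near(pointing, catalog, radius_rad):
    # cone search - keep catalog unit vectors whose angle from the
    # pointing unit vector is within radius_rad, equivalently whose
    # dot product with it is at least cos(radius_rad)
    return catalog[catalog @ pointing >= np.cos(radius_rad)]

catalog = np.array([[0.0, 0.0, 1.0],   # at the pointing direction
                    [1.0, 0.0, 0.0]])  # ninety degrees away
near = stars_near(np.array([0.0, 0.0, 1.0]), catalog, radius_rad=0.1)
```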

1990 - [photos](https://photos.app.goo.gl/vKBxieTbwsbmshCg8){:target="_blank" rel="noopener"}
-------------------------------------------------------------

summer of ninety - the university of texas at austin astronomy department - in recent months the hubble space telescope had finally reached orbit, and the berlin wall had fallen - rent was less than two hundred a month, just a short walk north of campus - martha ann zively, the eighty-three-year-old landlady, lived directly overhead, and mobile phones, notebook computers, and the web were all somewhere over the horizon - home internet was a dialup modem into the university's access point. since the previous fall, work meant the hubble space telescope astrometry team - a group with members from the astronomy department, mcdonald observatory, the aerospace engineering department, the center for space research, and the european space agency and its hipparcos project - paul hemenway, an astronomer involved with all those organizations, was both mentor and friend. hubble was designed for very exact and stable pointing to minimize motion smear in its images - three optical interferometers were mounted on robotic arms in the hubble’s focal plane to provide feedback to the pointing control system - these fine guidance sensors were a cutting-edge solution given seventies and eighties technology, with its uneasy mix of the analog and digital eras. exact calibrations were needed on-orbit to make the whole complex system work as intended.

the plates were roughly the size and shape of writing paper - the glass was fair

the workstation was a tall rack standing in the back corner and mounting a mini-fridge sized early sun box - on a table beside the rack was an extremely heavy old crt monitor showing one of the first primitive unix graphical user interfaces, the sunview precursor to x windows - this machine already had the antiquated feel of an earlier era. a scanning session meant creating a set of digitized raster files, one file for each trail scanned by the pds, archived on 9-track half-inch tape - a group of files, say thirty to fifty for a plate with a good exposure and lots of stars, was created in the filesystem of the workstation and then written to tape using its sibling above on the sixteenth floor, which had the tape drive - the shift over the border from analog to digital took place in the seventies style electronics connecting the pds to the workstation. a few days after scanning those first plates - paul and ray duncombe discussed the next steps in wrw, the aerospace building. there's a clear memory of the short walk from rlm to wrw - stopping in the texas sun - overhead was the typical hard blue summer sky with little white clouds, and sweat running down just seconds after stepping outside the air conditioning - the thunderbolt question had struck from a clear sky - exactly which stars were on those plates? how could those stars really, in practice, be determined, in order to determine the position of the asteroid? was there a program on the astronomy or aerospace computers to do that? the answer was, no - there wasn’t an easy or obvious solution, and helping to figure out a practical method of identifying those stars on those particular plates was the real job - not that an undergrad had any chance of even beginning to find a real solution, but even beginning to be aware of and recognize the magnitude of the problem was a huge step forward - how did one go about recognizing stars - humans could do it, but could an eighties computer system?

2003 - [photos](https://photos.app.goo.gl/ng8Nbxra2RYrbeWA7){:target="_blank" rel="noopener"}
-------------------------------------------------------------

thirteen years later, the boss for the next eleven years was bob schutz - working in aerospace and the icesat group at the center for space research, mostly on star trackers - modern descendants of maritime sextants for celestial navigation - along with inertial sensors, often referred to simply as gyros. the problems once again, at root, concerned images containing a scattering of unknown stars - within aerospace, it’s a classic problem with a memorable name - the lost in space problem. given an image of some stars, exactly which stars are they? aerospace has its own perspectives, culture, and tools - astronomers don’t generally think in terms of three-dimensional unit vectors, rotation matrices, quaternions, and vector-matrix notation - it was very quickly apparent that the concerns and methods in aerospace were more widely applicable than those in astronomy - bringing together optimization, control, data fusion, high performance computing, and nn to solve practical real-world problems. within weeks of beginning, star identification was again one of the top concerns - and once again the first question was whether a practical solution was already available. pete shelus from the hubble astrometry days was a member of the group and pointed out useful directions - there was a strong sense of continuity and awareness that here was a problem that really needed addressing - the obvious differences now were that computing hardware was more powerful, and digital imaging had become standard - there was no longer an analog to digital divide to cross - everything was already in binary.

icesat’s control system usually made it straightforward to predict which stars each image contained - this wasn’t obvious or straightforward at first and it took effort and thought to really understand the data coming from the spacecraft - there were four star imagers of three different hardware-types onboard, all sampling at ten hertz or more - these were classic eighties star trackers and didn't provide star identifications. there was also higher-frequency angular-rate data from the inertial unit, and tracking data from the control system - so a pointing vector could usually be estimated for each star-image, and it was usually enough to check whether star-images with appropriate brightnesses were near their predicted positions. brightness information tends to muddy the star identification problem because it’s relatively difficult to measure and predict for a particular imager - images have better geometric information than brightness information - an astronomer interested in brightness does photometry with dedicated sensors, not with imagers. an additional check was that angles between observed star pairs matched predictions, and one of the first objectives was to model errors in these angles from flight data - focusing on star pairs is a big step in the direction of looking at star triangles and patterns.
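the pair-angle check can be sketched directly - because angles between pairs of stars are frame independent, the observed set (tracker frame) and predicted set (celestial frame) can be compared without knowing the attitude. names here are assumed for illustration, not taken from the icesat flight software:

```python
import numpy as np

def pair_angle_residuals(observed, predicted):
    # observed and predicted are matched (n, 3) arrays of unit
    # vectors, each in its own frame - for every pair of stars,
    # compare observed and predicted inter-star angles
    n = len(observed)
    res = []
    for i in range(n):
        for j in range(i + 1, n):
            obs = np.arccos(np.clip(np.dot(observed[i], observed[j]), -1.0, 1.0))
            pre = np.arccos(np.clip(np.dot(predicted[i], predicted[j]), -1.0, 1.0))
            res.append(obs - pre)
    return np.array(res)

# rotate the predicted vectors by an arbitrary attitude - rotation
# preserves inter-star angles, so residuals should all be near zero
t = 0.3
rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                [np.sin(t),  np.cos(t), 0.0],
                [0.0,        0.0,       1.0]])
predicted = np.eye(3)
observed = predicted @ rot.T
residuals = pair_angle_residuals(observed, predicted)
```

modeling the error statistics of these residuals from flight data is exactly the step the paragraph above describes.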

it turned out there's a fascinating, though relatively small, literature on star identification and related topics - by the second world war, many large aircraft had a bubble window facing upward for a navigator to make stellar observations - after the war, computing and imaging automated the process. the cold war brought new motivations for the technology - many people became uneasily aware of guidance systems, and while most of the massive efforts went into integrated circuits and inertial guidance sensors, automated star tracking quietly matured in parallel. star trackers are critical for spacecraft, and are used on high altitude aircraft and missiles - the classical period was the sixties through the eighties. surprisingly though, it soon became clear that there was still no publicly-available software package for the lost-in-space star identification problem - apparently, each time star identification software had been developed, it had been within classified or industry projects. if you were seriously interested in star identification, you probably wanted to sell star trackers - that’s a fairly mature industry now.

2016 - [photos](https://photos.app.goo.gl/z54G7X9dEop1e81y6){:target="_blank" rel="noopener"}
-------------------------------------------------------------

another thirteen years passed - excitement was growing again, after the ai winter following the eighties, around advances in neural networks - especially at google, which had just open sourced tensorflow. for a number of reasons, it was clearly time to tackle the problem directly, using both geometric and nn methods in parallel where possible - the concept was to start from scratch as a github open source project, integrating tensorflow from the beginning. this meant working in c++ eigen and python numpy - the only external input was to be a list of star positions, and nasa’s skymap star catalog was an ideal source. skymap was created in the nineties specifically for use with star trackers - we’d used it extensively for icesat, even collaborating where possible with its creators. when hubble was launched, one of its early problems was bad guide stars. as part of the overall hubble recovery effort, nasa pushed skymap forward as an improved description of the sky, as seen by standard star trackers.

2 changes: 1 addition & 1 deletion readme.md

lo-fi star identification

[background](https://statespacedev.github.io/starid/docs/notes.html) - random discussion of where this all comes from. [data](https://github.com/statespacedev/starid/blob/dev/data/) - the full NASA SKYMAP2000 V5R4 star catalog. [references](https://github.com/statespacedev/starid/blob/dev/docs/readme.md) - articles relating to star identification.

[star triangles](https://github.com/statespacedev/starid/blob/dev/libstarid/startriangleidentifier.h) - in NOMAD star recognition, there's a chain of triangles and basesides. side2 of each triangle is the baseside of the following triangle. during feedback, these shared side2 and baseside pairs are the path for information to flow backwards, increasing the constraints on the initial triangle baseside and basestar. the name NOMAD relates to how the chain of triangles wanders away from the target star and initial triangle. in SETTLER star recognition, the target star is always star a. star b is a neighbor star. in the inner loops, additional stars c and d are involved. first an abca triangle is formed. this constrains the abside. then for an abca triangle, a sequence of abda triangles is formed, further constraining the abside. when we reach an abda that eliminates all but one star pair possibility for the abside, we've recognized the target star. the name SETTLER comes from the idea that we never move away from the target star, we're settling around it.

