
Commit

Merge branch 'philip/1.1prep_writerside' of https://github.com/freemocap/documentation into philip/1.1prep_writerside
philipqueen committed Feb 20, 2024
2 parents 9a97645 + 46e0fb6 commit 403862c
Showing 7 changed files with 48 additions and 71 deletions.
50 changes: 5 additions & 45 deletions README.md
@@ -13,54 +13,14 @@ ___

# Style Guide

This is a work in progress :)

The goal is to create a consistent style across all parts of the docs to create what I, Trent, am calling "semantic continuity" in how we communicate to our users. Semantic continuity means that the meaning of various symbols and text formatting is consistent across our documentation.
- This could be as small as making sure that there are periods at the end of every sentence, even in bullet points.
- Or it could be as big as making sure that [Admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/), or "call-outs", are used in the same way across the docs.

## [Admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) - AKA Call-Outs
Our docs currently employ `6` types of Admonitions. Four are custom and can be found in `/stylesheets/extra.css`; the other standard admonition styles can be found [here](https://squidfunk.github.io/mkdocs-material/reference/admonitions/#supported-types).

Each Admonition has a unique aesthetic and serves a specific purpose:

`tip-full-width`
- Description: Blueish box with white sparkle skelly icon. This is a custom Admonition.
- Purpose: Provide freemocap-specific tips. For example, in "How to process pre-recorded synchronized videos with `alpha` GUI", the `tip-full-width` helps folks who are trying to post-process videos they recorded using the `pre-alpha`.

![tip-full-width](https://user-images.githubusercontent.com/62706609/202797425-e0d0ec64-7752-4aeb-a76e-a39dd671be9f.png)
> Developer Note: `tip-full-width` should be renamed.
`take-note`
- Description: Greenish box with a dark green pencil icon. This is a custom Admonition.
- Purpose: Provide useful notes that are relevant to the page that the viewer is on. This call-out is important for helping folks stay on the ["happy path"](https://en.wikipedia.org/wiki/Happy_path) of getting the task they came here to do, done.

![take-note_admonition](https://user-images.githubusercontent.com/62706609/202797519-31d45917-568c-40f4-9978-a6f2a2205177.png)
> `2024-02-15` We're currently in the process of switching from Material MkDocs to WriterSide for our documentation tool. To avoid confusion, we're deleting content from this README that was specifically tied to Material MkDocs, and will fill out our style guide in the future with information relevant to our doc style in WriterSide.

`blender`
- Description: Creamsicle-orange box with black [Blender](https://www.google.com/search?client=firefox-b-1-d&q=blender) logo. This is a custom Admonition.
- Purpose: Provide information connecting freemocap to Blender. This could be linking to a Blender tutorial, or linking to the Blender download page. Blender is cool!

![blender_admonition](https://user-images.githubusercontent.com/62706609/202797532-8cf8f03b-14de-4725-9c5d-933a70329000.png)

`finished`
- Description: Bright-pink box with a white happy skull icon. This is a custom Admonition.
- Purpose: The purpose of this Admonition is to congratulate users for finishing a How-To or a Tutorial, and provide them with next steps.

![finished_admonition](https://user-images.githubusercontent.com/62706609/202797567-d03ec289-8e66-4577-b831-1cd4ab4f6374.png)

`info`
- Description: Bright blue box with a bright blue circle with the letter `i` cut out as the icon. This Admonition is one of the default call-outs from `Material for MkDocs`.
- Purpose: To provide the user with tangential but interesting information that could be useful to the task at hand. If opened, this could easily lead the user down a rabbit hole; `info` admonitions should always be presented closed and openable by the user, implemented in the `.md` files with `???` instead of the standard `!!!` for an open admonition.

![info_admonition](https://user-images.githubusercontent.com/62706609/202797755-e8bd7d78-5ddd-412a-8084-b0e5e20e9186.png)
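To make the open/closed distinction concrete, here is a minimal sketch of the Material for MkDocs markup (titles and body text here are illustrative only):

```markdown
!!! tip "Rendered open"
    A standard admonition, visible by default.

??? info "Rendered closed"
    A collapsible admonition the reader can choose to expand.
```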

`warning`
- Description: Bright orange box with an orange triangle with an `!` inside of it. This Admonition is one of the default call-outs from `Material for MkDocs`.
- Purpose: We use this call-out to clearly warn users of potential pitfalls. If this were perfect software, we probably wouldn't need these. But it's not! And that's okay :) These call-outs will save us issue-posts.
- Styling note: the shown text for warnings should always begin with `Warning:`, followed by a detailed explanation that clearly tells the user what to avoid, or what to make sure they do.
The goal is to create a consistent style across all parts of the docs to create what Trent is calling "semantic continuity" in how we communicate to our users. Semantic continuity means that the meaning of various symbols and text formatting is consistent across our documentation.
- This could be as small as making sure that there are periods at the end of every sentence, even in bullet points.
- Or it could be as big as making sure that admonitions or "call-outs", are used in the same way across the docs.

![warning_admonition](https://user-images.githubusercontent.com/62706609/202797732-7d7e8556-5d9e-4008-907e-bce50fb272c6.png)



19 changes: 12 additions & 7 deletions docs/Writerside/topics/getting_started/multi_camera_calibration.md
@@ -1,8 +1,13 @@
# Multi-Camera Calibration Guide
> [Check out this video for more information and directed guidance in the calibration process](https://youtu.be/GxKmyKdnTy0?t=1615)
> {style="tip"}

**Note:** This calibration process describes the use of an anipose-based calibration method. We will soon be updating our method to use a more flexible and interactive interface.
> This calibration process describes the use of an anipose-based calibration method. We will soon be updating our method to use a more flexible and interactive interface.
{style="note"}

<procedure title="Video Guidance" collapsible="true">

[Check out this video for more information and directed guidance in the calibration process](https://youtu.be/GxKmyKdnTy0?t=1615)

</procedure>

## Preparing the Charuco Board
To perform a multi-camera calibration, you'll need to print out a [Charuco board image](https://github.com/freemocap/freemocap/blob/main/freemocap/assets/charuco/charuco_board_image.png).
@@ -12,7 +17,7 @@ For smaller spaces, a simple printout from a standard printer should work just fine.
For larger spaces, you might need to print this on a larger poster board so that it can be seen well by the cameras.

## Setting up Cameras
To get a multiple camera recording, you'll need multiple cameras set up and connected to your computer. There are detailed instructions on multiple camera setups in the [Detailed Setup Guide](../resources/detailed_setup.md), but for now it will suffice to have two or more (three or more is best) cameras connected directly to your computer. We don't recommend using a USB hub to connect cameras. The cameras should be set up so they all see the subject at the same time, and have a 40-60 degree angle between each camera from the subject's viewpoint.
To get a multiple camera recording, you'll need multiple cameras set up and connected to your computer. There are detailed instructions on multiple camera setups in the [Detailed Setup Guide](detailed_setup.md), but for now it will suffice to have two or more (three or more is best) cameras connected directly to your computer. We don't recommend using a USB hub to connect cameras. The cameras should be set up so they all see the subject at the same time, and have a 40-60 degree angle between each camera from the subject's viewpoint.

## Recording Calibration Videos
In the camera view section of the GUI, select the calibration videos option:
@@ -26,9 +31,9 @@ For more information about how to use the board to get a high quality calibration
## Processing the Calibration
Once you have given each camera a good view of the board shared with another camera, click "Stop Recording," and it will begin the calibration process automatically.

!!! tip
Be sure to keep an eye on the terminal that launched the GUI for more helpful output, as the log at the bottom of the GUI screen does not display every detail.

> Be sure to keep an eye on the terminal that launched the GUI for helpful output, as the log at the bottom of the GUI screen does not capture all of the outputs yet.
> *Note: The terminal only launches in this way on Windows*.
{style="note"}

## Recording Motion Capture Videos

@@ -19,10 +19,12 @@ Launch FreeMoCap from the terminal by activating the relevant Python environment
## Camera Detection
The software should locate your cameras, and once they're connected, it will show a viewpoint from the connected camera in the GUI. You can adjust the settings in the sidebar and then click "Apply Settings to Cameras" to apply them.

> The most important setting to look at right now is the exposure setting, which you should make as low as possible to decrease blur. We generally like to keep it below -6. Adjust it downwards until the image looks crisp, which will probably make it look slightly darker than you would normally expect.
>
> For this simple single-camera recording, this isn't a crucial step. As long as you can see yourself in the image, you should be tracked okay, but it's good to keep in mind for the future.
> {style="tip"}
<procedure title="Tips: Exposure and Framing" collapsible="true">

> Prioritize proper exposure for the best video quality. Start by lowering your exposure setting (ideally below -6) to reduce blur and create a crisp image. This may initially make the image appear slightly darker than expected. While perfect framing isn't critical for a simple single-camera setup, ensure you're visible within the frame. Keeping good framing practices in mind will be beneficial for future, more complex recording scenarios.
{style="note"}

</procedure>

## Recording
Because you're doing a single camera recording, you don't need to do any calibration. But when you do graduate to multi-camera recordings, this is where you would get out a Charuco board and run a calibration first. We're all clear to record our motion capture for now though.
32 changes: 20 additions & 12 deletions docs/Writerside/topics/resources/detailed_setup.md
@@ -4,40 +4,48 @@ While FreeMoCap can record anywhere that's big enough to capture your entire body

Don't be afraid to start small and simple - any set up is better than no set up. You can always add complexity as you become more comfortable with the system.

## [Video Tutorial](https://www.youtube.com/watch?v=GxKmyKdnTy0&t=872s)
## Video Tutorial

This video uses an older version of the software, but the discussion of hardware, lighting, and camera placement is still relevant (use the timestamps to jump to specific sections).
[This video](https://www.youtube.com/watch?v=GxKmyKdnTy0&t=872s) uses an older version of the software, but the discussion of hardware, lighting, and camera placement is still relevant (use the timestamps to jump to specific sections).

## Lighting Conditions

Lighting is crucial for a camera-based system like FreeMoCap. For best results, use bright environments, such as near open windows during the day. Be cautious, as environments that appear bright to our eyes may be quite dim (human eyes are exceptionally good at adapting to different lighting conditions).

!!! tip-full-width "Exposure Settings"
When setting the exposure on the GUI, aim for a setting of at most `-6`, but better results can be achieved at `-7` or `-8` or lower.
A short exposure time means that the camera's (digital) shutter will be open for a shorter time on each recorded frame. That means that your subject will have less time to move during the exposure, which will reduce motion blur and make it easier to identify the subject's joints in each frame.

On a technical level, **we want enough light in our scene that we can set our camera's Exposure times to be as short as possible**.
<procedure title="Exposure Settings" collapsible="true">

A short exposure time means that the camera's (digital) shutter will be open for a shorter time on each recorded frame. That means that your subject will have less time to move during the exposure, which will reduce motion blur and make it easier to identify the subject's joints in each frame.
When setting the exposure on the GUI, aim for a setting of at most `-6`, but better results can be achieved at `-7` or `-8` or lower.

On a technical level, **we want enough light in our scene that we can set our camera's Exposure times to be as short as possible**.

</procedure>

??? info "More on Exposure Settings"
For information on how these camera exposure settings map onto actual time measurements, see [this article](https://www.kurokesu.com/main/2020/05/22/uvc-camera-exposure-timing-in-opencv/).
<procedure title="More on Exposure Settings" collapsible="true">

For information on how these camera exposure settings map onto actual time measurements, see [this article](https://www.kurokesu.com/main/2020/05/22/uvc-camera-exposure-timing-in-opencv/).
{style="note"}

</procedure>
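As a rough illustration of why lower settings mean shorter shutter times: per the article linked above, UVC-style exposure settings on many cameras map to powers of two of a second. The helper below is a sketch of that assumed mapping (the function name is ours; actual behavior varies by camera and driver):

```python
def exposure_setting_to_seconds(setting):
    """Approximate shutter time for a UVC-style exposure setting.

    Assumption (from the article linked above): a setting of n maps to
    roughly 2**n seconds, so -6 is ~1/64 s and -8 is ~1/256 s.
    Actual behavior varies by camera and driver.
    """
    return 2.0 ** setting


# Lower settings mean exponentially shorter exposures, hence less motion blur:
for setting in (-6, -7, -8):
    print(setting, exposure_setting_to_seconds(setting))
```

Each step down the scale halves the time the shutter is open, which is why the image darkens as it sharpens.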

## Background Considerations

A solid-colored backdrop is not strictly necessary, but using one can improve the quality of recordings. Covering complex visual textures in the camera's field of view makes tracking the human in the scene easier, leading to better outcomes. Start with a simple setup first, and add complexity once you've established that the simple version works.

!!! tip-full-width "Make Your Motion Capture Stand Out"
Having visual contrast between your subject and the background will help your subject stand out in recordings.
Having visual contrast between your subject and the background will help your subject stand out in recordings. Example: wear dark clothing against a light background.

Similarly, it is better for subjects to wear tight-fitting clothes that expose joints like elbows and knees. While not strictly necessary, it will lead to better quality recordings. Consider the machine's perspective: if the subject is wearing baggy pants and the software is asked to identify the knee, it will struggle. It may produce an estimate close to the correct position, but tracking the knee will be easier if it is visible or if the subject is wearing tight-fitting pants.

## Camera Placement and Configuration

Camera placement is critical. Ensure that the subject is in full view of each camera. In multi-camera setups, ensure that at least two, preferably three, cameras can see every part of the subject's body at any given time.

!!! tip-full-width "Working In A Tight Space"
Cameras can be rotated 90° to portrait orientation to capture a standing human in a tighter space. You can even mix portrait and landscape views together to better capture your space.
<procedure title="Working with small spaces" collapsible="true">

Cameras can be rotated 90° to portrait orientation to capture a standing human in a tighter space. You can even mix portrait and landscape views together to better capture your space.

</procedure>

Ideally, the person should occupy as much of the camera screen as possible, providing more information for the software to track the person. Most testing has been done on low-quality webcams, although successful recordings have been made with GoPros, DSLRs, and mobile phones.

2 changes: 1 addition & 1 deletion docs/Writerside/topics/resources/multiprocessing.md
@@ -3,4 +3,4 @@
The multiprocessing option allows you to process multiple videos at the same time, decreasing the total processing time. Each process runs on a separate CPU core. You can set the number of CPU cores to use by changing the "Max Number of Processes to Use" option in the "2d Image Trackers" parameter group. The default is the maximum number of cores on your machine minus one, to account for the process the GUI is running in. In the example below, the machine has 8 cores total, and 7 show up as available. FreeMoCap will never create more processes than the number of videos in your active recording. If you would like to turn off multiprocessing entirely, you can set the "Max Number of Processes to Use" to one.
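The worker-count rule described above can be sketched as follows (a minimal illustration, not FreeMoCap's actual implementation; the function name is ours):

```python
import os


def default_process_count(num_videos, max_processes=None):
    """Sketch of the default described above: use all cores minus one
    (reserving a core for the GUI process), but never spawn more
    processes than there are videos, and never fewer than one."""
    if max_processes is None:
        max_processes = max(1, (os.cpu_count() or 2) - 1)
    return max(1, min(max_processes, num_videos))
```

On the 8-core example machine above, `max_processes` defaults to 7, so a recording with 3 videos gets 3 worker processes, while one with 20 videos still gets only 7.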


![MultiProcessing Parameter Screenshot](multiprocessing_crop_detail.png)
![MultiProcessing Parameter Screenshot](multiprocessing_crop_detail.png){ width="450" }
4 changes: 3 additions & 1 deletion docs/Writerside/topics/resources/reprojection_filtering.md
@@ -1,6 +1,8 @@
# Reprojection Error Filtering

Reprojection error filtering is an optional postprocessing step to the 3D triangulation stage of processing. It retriangulates outlier data from the 3d triangulation with the cameras contributing the most error removed. Reprojection error filtering is most effective when there is poor skeleton detection in one or more camera views. It is turned on and off in the "Process Data" tab with the "Run Reprojection Error Filtering" checkbox. It is turned off by default.

![Detail of Reprojection Error Filtering Options](reprojection_filtering_crop_detail.png)
![Detail of Reprojection Error Filtering Options](reprojection_filtering_crop_detail.png){ width="450" }

## What is Reprojection Error?
"Reprojection error" is the distance between the originally measured point (i.e. a joint on the 2d skeleton) and the reconstructed 3d point reprojected back onto the original image. The intuition is that if the 3d reconstruction and original 2d track are perfect, then reprojection error will be zero. If it isn't, then there is some inaccuracy in either: the original 2d tracks (i.e. bad skeleton detection from one or more cameras), in the 3d reconstruction (i.e. bad camera calibration), or a combination of the two.
2 changes: 1 addition & 1 deletion docs/Writerside/topics/resources/yolo_cropping.md
@@ -2,7 +2,7 @@

YOLO cropping is a processing option that passes each video frame through YOLO v8's object detection model to find the most likely area of the person in the frame before passing the image to the pose estimation model. This can help reduce false positives during pose estimation and aid tracking of subjects that are relatively small in the camera's field of view. This process increases the quality of the output data at the cost of increased processing time. It can be turned on and off in the "Process Data" tab with the "Use YOLO Crop Method" checkbox. It is turned off by default.

![Detail of YOLO Crop Checkbox](YOLO_crop_detail.png)
![Detail of YOLO Crop Checkbox](YOLO_crop_detail.png){ width="450" }

## How it Works
The pose estimation models FreeMoCap uses are all set up to process a certain size of image. While you can pass any size of video into FreeMoCap, the images get downsized to a set image size (256x256 pixels for MediaPipe) before going into the pose estimation model. This can have a negative effect on the model's processing, especially if the subject doesn't take up most of the image.
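A back-of-the-envelope sketch of why cropping helps, using an idealization where the crop exactly matches the person's bounding box (the numbers and function name are ours for illustration):

```python
def person_share_of_frame(frame_w, frame_h, person_w, person_h):
    """Fraction of the image area the person occupies. Because the frame
    is downsized to a fixed model input (e.g. 256x256 for MediaPipe),
    this fraction is roughly the share of model pixels spent on the person."""
    return (person_w * person_h) / (frame_w * frame_h)


# Without cropping, a small subject in a 1920x1080 frame gets ~5% of the
# model input; an idealized tight crop would devote ~100% to the subject.
no_crop_share = person_share_of_frame(1920, 1080, 192, 540)
```

Cropping first means far more of those 256x256 input pixels describe the person rather than the background.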
