1.1 prep #47

Closed
wants to merge 4 commits into from
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -5,3 +5,4 @@ env

utilities/output/*
.idea/*
docs/assets/.DS_Store
Binary file added docs/assets/images/YOLO_crop_detail.png
2 changes: 1 addition & 1 deletion docs/getting_started/installation.md
@@ -117,4 +117,4 @@ Now that you've got FreeMoCap installed, you're ready to record your first motio

The first step is to set up your environment for motion capture and connect your cameras

:point_right: [Set up your environment](/getting_started/your_first_recording/)
:point_right: [Set up your environment](./your_first_recording.md)
9 changes: 6 additions & 3 deletions docs/getting_started/multi_camera_calibration.md
@@ -1,4 +1,3 @@
## Multi-Camera Calibration Guide
!!! tip
[Check out this video for more information and directed guidance in the calibration process](https://youtu.be/GxKmyKdnTy0?t=1615)

@@ -11,6 +10,9 @@ For smaller spaces, a simple printout from a standard printer should work just f

For larger spaces, you might need to print this on a larger poster board so that it can be seen well by the cameras.

### Setting up Cameras
To get a multiple camera recording, you'll need multiple cameras set up and connected to your computer. There are detailed instructions on multi-camera setups in the [Detailed Setup Guide](../resources/detailed_setup.md), but for now it will suffice to have two or more (three or more is best) cameras connected directly to your computer. We don't recommend using a USB hub to connect cameras. The cameras should be set up so they all see the subject at the same time, with a 40-60 degree angle between each camera from the subject's viewpoint.
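As a rough way to check the 40-60 degree guideline, you can compute the angle between two cameras as seen from the subject. This is just a geometric sketch (not part of FreeMoCap) using made-up floor-plan coordinates in meters:

```python
import math

def camera_separation_deg(subject, cam_a, cam_b):
    """Angle (in degrees) between two cameras as seen from the subject."""
    ax, ay = cam_a[0] - subject[0], cam_a[1] - subject[1]
    bx, by = cam_b[0] - subject[0], cam_b[1] - subject[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # Clamp to avoid domain errors from floating-point rounding
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Subject at the origin, two cameras 3 m away at 0 and 50 degrees
angle = camera_separation_deg(
    (0, 0), (3, 0), (3 * math.cos(math.radians(50)), 3 * math.sin(math.radians(50)))
)
print(round(angle))  # prints 50 -- inside the 40-60 degree sweet spot
```

If the computed angle is much below 40 degrees, the two cameras are seeing nearly the same view and add little triangulation information.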

### Recording Calibration Videos
In the camera view section of the GUI, select the calibration videos option:
![image](/assets/images/freemocap_calibration_window_w_text_overlay.png)
@@ -23,14 +25,15 @@ For more information about how to use the board to get a high quality calibratio
Once you have given each camera a good view of the board shared with another camera, click "Stop Recording," and it will begin the calibration process automatically.

!!! tip
Be sure to keep an eye on the terminal that launched the GUI for helpful output, as the log at the bottom of the GUI screen does not capture all of the outputs yet.
Be sure to keep an eye on the terminal that launched the GUI for more helpful output, as the log at the bottom of the GUI screen does not display every detail.


### Recording Motion Capture Videos

Once you have completed the calibration process, you are ready to record motion capture videos!

Select "Motion Capture Videos" from the camera view section of the GUI, and then click "Record." Perform your movement, and then click "Stop." The software will automatically process the videos and generate a Blender scene with the output data!
Select "Record Motion Capture Videos" from the camera view section of the GUI, and then click "Record." Perform your movement, and then click "Stop." The software will automatically process the videos and generate a Blender scene with the output data!

To manually process/re-process the videos, use the `Process Motion Capture Videos` button in the `Processing` tab of the GUI.

Now that you've recorded 3D motion capture on your own, you're ready to move on to the [Next Steps](./next_steps.md)
5 changes: 5 additions & 0 deletions docs/getting_started/next_steps.md
@@ -0,0 +1,5 @@
If you have followed along with the previous guides, congrats! You should now be able to record your own 3D motion capture. We highly recommend making a lot of short recordings to figure out what works best for you and your use cases. It's also a fun excuse to do a little dance or practice your miming skills.

If you'd like to understand more details about using FreeMoCap, including how to set up a recording space and more information on the various settings, visit our [Resources](../resources/index.md) section for more in-depth guides.

If you haven't found success, don't be too discouraged. We have a troubleshooting section with guides on [Installation Troubleshooting](../troubleshooting/installation_troubleshooting.md) and [Calibration Troubleshooting](../troubleshooting/calibration_troubleshooting.md) that will help you figure out common issues. If you continue to have trouble, [join our Discord](https://discord.gg/P2nyraRYjb) and ask for help in the #help-requests channel, or submit an issue on the [FreeMoCap GitHub](https://github.com/freemocap/freemocap/issues).
5 changes: 2 additions & 3 deletions docs/getting_started/single_camera_recording.md
@@ -1,5 +1,3 @@
## Single-Camera Recording

### Introduction
We recommend that everybody starts by creating a single-camera recording and reconstruction of their movement before moving on to more complex tasks like multi-camera calibration and reconstruction.

@@ -29,4 +27,5 @@ Because you're doing a single camera recording, you don't need to do any calibra

Click "Record" and go into the field of view to perform some kind of movement. Then click "Stop", and it should process automatically from there. When it's done, it will pop up a Blender scene if Blender was properly detected and populate the folder with the output files!


### 3D Data with Multiple Cameras
Now that you have gotten the process working with a single camera, it's time to try multiple cameras. You can start with our [Multi-Camera Calibration Guide](./multi_camera_calibration.md).
46 changes: 46 additions & 0 deletions docs/resources/detailed_setup.md
@@ -0,0 +1,46 @@
# Detailed Setup Guide

While FreeMoCap can record anywhere that's big enough to capture your entire body on video, some places and setups will give better results than others. The tips below will help you configure your space to get the best recordings, whether you're working with high-end gear or simple webcams.

Don't be afraid to start small and simple - any setup is better than no setup. You can always add complexity as you become more comfortable with the system.

## [Video Tutorial](https://www.youtube.com/watch?v=GxKmyKdnTy0&t=872s)

This video uses an older version of the software, but the discussion of hardware, lighting, and camera placement is still relevant (use the timestamps to jump to specific sections).

## Lighting Conditions

Lighting is crucial for a camera-based system like FreeMoCap. For best results, use bright environments, such as near open windows during the day. Be cautious, as environments that appear bright to our eyes may be quite dim (human eyes are exceptionally good at adapting to different lighting conditions).

!!! tip-full-width "Exposure Settings"
When setting the exposure on the GUI, aim for a setting of at most `-6`, but better results can be achieved at `-7` or `-8` or lower.

On a technical level, **we want enough light in our scene that we can set our camera's Exposure times to be as short as possible**.

A short exposure time means that the camera's (digital) shutter will be open for a shorter time on each recorded frame. That means that your subject will have less time to move during the exposure, which will reduce motion blur and make it easier to identify the subject's joints in each frame.

??? info "More on Exposure Settings"
For information on how these camera exposure settings map onto actual time measurements, see [this article](https://www.kurokesu.com/main/2020/05/22/uvc-camera-exposure-timing-in-opencv/).
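To make the exposure numbers concrete: on many UVC webcams (per the article above), the exposure setting is approximately the base-2 logarithm of the shutter time in seconds. A small sketch of that mapping - an approximation that varies by camera and driver, not an exact FreeMoCap behavior:

```python
def uvc_exposure_to_ms(exposure_setting: int) -> float:
    """Approximate shutter time in milliseconds for a UVC exposure setting.

    On many UVC webcams the setting is the base-2 log of the exposure time
    in seconds, so a setting of -6 means roughly 2**-6 s of shutter time.
    """
    return (2 ** exposure_setting) * 1000.0

for setting in (-6, -7, -8):
    print(f"exposure {setting}: ~{uvc_exposure_to_ms(setting)} ms shutter time")
```

This is why lower settings need brighter rooms: halving the shutter time (from `-6` to `-7`, roughly 15.6 ms down to 7.8 ms) halves the light reaching the sensor, but also halves how far a joint can move during one frame's exposure.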


## Background Considerations

A solid-colored backdrop is not strictly necessary, but using one can improve the quality of recordings. Covering complex visual textures in the camera's field of view makes tracking the human in the scene easier, leading to better outcomes. Start with a simple setup first, and add complexity once you've established that the simple version works.

!!! tip-full-width "Make Your Motion Capture Stand Out"
Having visual contrast between your subject and the background will help your subject stand out in recordings.

Similarly, it is better for subjects to wear tight-fitting clothes that expose joints like elbows and knees. While not strictly necessary, it will lead to better quality recordings. Consider the machine's perspective: if the subject is wearing baggy pants and the software is asked to identify the knee, it will struggle. It may produce an estimate close to the correct position, but tracking the knee will be easier if it is visible or if the subject is wearing tight-fitting pants.

## Camera Placement and Configuration

Camera placement is critical. Ensure that the subject is in full view of each camera. In multi-camera setups, ensure that at least two, preferably three, cameras can see every part of the subject's body at any given time.

!!! tip-full-width "Working In A Tight Space"
Cameras can be rotated 90° to portrait orientation to capture a standing human in a tighter space. You can even mix portrait and landscape views together to better capture your space.

Ideally, the person should occupy as much of the camera screen as possible, providing more information for the software to track the person. Most testing has been done on low-quality webcams, although successful recordings have been made with GoPros, DSLRs, and mobile phones.

In multi-camera situations, separate the cameras enough to provide different points of view on the subject. If multiple cameras are positioned too close together, they won't add much information to the scene. Separating them by a sufficient angle improves triangulation. Just make sure each camera can share a view of the Charuco board with another camera during calibration.

To ensure high-quality recordings, adjust your camera settings for the best image results - mainly by providing good lighting and setting an exposure appropriate to those lighting conditions. A successful calibration is also necessary, which you can read about in our [Multi-Camera Calibration Tutorial](multi_camera_calibration.md).
11 changes: 11 additions & 0 deletions docs/resources/faq.md
@@ -0,0 +1,11 @@
# Frequently Asked Questions

## How is FreeMoCap free?

## How can I contribute to FreeMoCap?

## Does FreeMoCap work in realtime?

## Can FreeMoCap track multiple people at once?

## Can I track things that are not humans?
44 changes: 6 additions & 38 deletions docs/resources/index.md
@@ -1,6 +1,3 @@
While FreeMoCap can record anywhere that's big enough to capture your entire body on video, some places and set ups will give better results than others. The tips below will help you configure your space to get the best recordings, whether you're working with high-end gear or simple webcams.

Don't be afraid to start small and simple - any set up is better than no set up. You can always add complexity as you become more comfortable with the system.


!!! tip-full-width "Grip it & Rip it :sparkles:"
@@ -13,45 +10,16 @@ Don't be afraid to start small and simple - any set up is better than no set up.

If you get stuck, [join our Discord](https://discord.gg/P2nyraRYjb) and ask for help in the #help-requests channel!

## [Video Tutorial](https://www.youtube.com/watch?v=GxKmyKdnTy0&t=872s)

This video uses an older version of the software, but the discussion of hardware, lighting, and camera placement is still relevant (use the timestamps to jump to specific sections):


# Detailed Setup Guide
## Lighting Conditions

Lighting is crucial for a camera-based system like FreeMoCap. For best results, use bright environments, such as near open windows during the day. Be cautious, as environments that appear bright to our eyes may be quite dim (Human eyes are exceptionally good at adapting to different lighting conditions).

!!! tip-full-width "Exposure Settings"
When setting the exposure on the GUI, aim for a setting of at most `-6`, but better results can be achieved at `-7` or `-8` or lower.

On a technical level, **we want enough light in our scene that we can set our camera's Exposure times to be as short as possible**.

A short exposure time means that the camera's (digital) shutter will be open for a shorter time on each recorded frame. That means that your subject will have less time to move during the exposure, which will reduce motion blur and make it easier to identify the subject's joints in each frame.

??? info "More on Exposure Settings"
For information on how these camera exposure settings map onto actual time measurements, see [this article](https://www.kurokesu.com/main/2020/05/22/uvc-camera-exposure-timing-in-opencv/).


## Background Considerations

A solid-colored backdrop is not strictly necessary, but using one can improve the quality of recordings. Covering complex visual textures in the camera's field of view makes tracking the human in the scene easier, leading to better outcomes. Start with a simple setup first, and add complexity once you've established that the simple version works.

!!! tip-full-width "Make Your Motion Capure Stand Out"
Having visual contrast between your subject and the background will help your subject stand out in recordings.
## Details on setting up a recording space:

Similarly, it is better for subjects to wear tight-fitting clothes that expose joints like elbows and knees. While not strictly necessary, it will lead to better quality recordings. Consider the machine's perspective: if the subject is wearing baggy pants and the software is asked to identify the knee, it will struggle. It may produce an estimate close to the correct position, but tracking the knee will be easier if it is visible or if the subject is wearing tight-fitting pants.
[Detailed Camera Setup Guide](../resources/detailed_setup.md)

## Camera Placement and Configuration
## Details on parameter options:

Camera placement is critical. Ensure that the subject is in full view of each camera. In multi-camera setups, ensure that at least two, preferably three, cameras can see every part of the subject's body at any given time.
[Reprojection Filtering Parameters](../resources/reprojection_filtering.md)

!!! tip-full-width "Working In A Tight Space"
Cameras can be rotated 90° to portrait orientation to capture a standing human in a tighter space. You can even mix portrait and landscape views together to better capture your space.
[YOLO Cropping Parameter](../resources/yolo_cropping.md)

Ideally, the person should occupy as much of the camera screen as possible, providing more information for the software to track the person. Most testing has been done on low-quality webcams, although successful recordings have been made with GoPros, DSLRs, and mobile phones.
[Multiprocessing Parameter](../resources/multiprocessing.md)

In multi-camera situations, separate the cameras enough to provide different points of view on the subject. If multiple cameras are positioned too close together, they won't add much information to the scene. Separating them by a sufficient angle improves triangulation. Just make sure each camera can share a view of the Charuco board with another camera during calibration.

High-quality recordings can be ensured by adjusting camera settings for optimal image results, mainly by setting the appropriate exposure for the lighting conditions and having good lighting. A successful calibration is also necessary, which you can read about in our [Multi-Camera Calibration Tutorial](multi_camera_calibration.md).
4 changes: 4 additions & 0 deletions docs/resources/multiprocessing.md
@@ -0,0 +1,4 @@
The multiprocessing option allows you to process multiple videos at the same time, decreasing the total processing time. Each process runs on a separate CPU core. You can set the number of CPU cores to use by changing the "Max Number of Processes to Use" option in the "2d Image Trackers" parameter group. The default is the maximum number of cores on your machine minus one, to account for the process the GUI is running in. In the example below, the machine has 8 cores total, and 7 show up as available. FreeMoCap will never create more processes than the number of videos in your active recording. If you would like to turn off multiprocessing entirely, set "Max Number of Processes to Use" to one.


![MultiProcessing Parameter Screenshot](../assets/images/multiprocessing_crop_detail.png)
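The default process count described above can be sketched with Python's standard `multiprocessing` module. This is an illustration of the behavior, not FreeMoCap's actual code; `track_2d` and the `cam_*.mp4` names are hypothetical placeholders:

```python
import multiprocessing

def track_2d(video_path: str) -> str:
    # Placeholder for the per-video 2D tracking work
    return f"tracked {video_path}"

def default_max_processes(n_videos: int) -> int:
    """Mirror the described default: all cores minus one (reserved for the
    GUI), but never more processes than there are videos, and at least one."""
    cores_available = max(1, multiprocessing.cpu_count() - 1)
    return max(1, min(cores_available, n_videos))

if __name__ == "__main__":
    videos = ["cam_0.mp4", "cam_1.mp4", "cam_2.mp4"]
    # On an 8-core machine this uses min(7, 3) = 3 worker processes
    with multiprocessing.Pool(processes=default_max_processes(len(videos))) as pool:
        print(pool.map(track_2d, videos))
```

Because each video is independent during 2D tracking, spreading videos across worker processes scales well until you run out of cores or videos.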
19 changes: 19 additions & 0 deletions docs/resources/reprojection_filtering.md
@@ -0,0 +1,19 @@
Reprojection error filtering is an optional postprocessing step after the 3D triangulation stage of processing. It retriangulates outlier data from the 3D triangulation with the cameras contributing the most error removed. Reprojection error filtering is most effective when there is poor skeleton detection in one or more camera views. It is turned on and off in the "Process Data" tab with the "Run Reprojection Error Filtering" checkbox, and is turned off by default.

![Detail of Reprojection Error Filtering Options](../assets/images/reprojection_filtering_crop_detail.png)

## What is Reprojection Error?
"Reprojection error" is the distance between the originally measured point (i.e. a joint on the 2D skeleton) and the reconstructed 3D point reprojected back onto the original image. The intuition is that if the 3D reconstruction and original 2D track are perfect, then the reprojection error will be zero. If it isn't, then there is some inaccuracy in either the original 2D tracks (i.e. bad skeleton detection from one or more cameras), the 3D reconstruction (i.e. bad camera calibration), or a combination of the two.
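The definition above can be written in a few lines. This is a minimal sketch of the general computation (standard pinhole projection with a 3x4 camera matrix), not FreeMoCap's internal implementation:

```python
import numpy as np

def reprojection_error(projection_matrix, point_3d, observed_2d):
    """Project a 3D point with a 3x4 camera matrix and measure the pixel
    distance back to the originally detected 2D point."""
    homogeneous = projection_matrix @ np.append(point_3d, 1.0)
    projected_2d = homogeneous[:2] / homogeneous[2]  # perspective divide
    return float(np.linalg.norm(projected_2d - observed_2d))

# Identity camera at the origin: the point (0.1, 0.2, 1.0) projects to (0.1, 0.2),
# so a detection at exactly (0.1, 0.2) gives zero reprojection error.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
print(reprojection_error(P, np.array([0.1, 0.2, 1.0]), np.array([0.1, 0.2])))  # 0.0
```

A large value from this function for one camera suggests that either its 2D detection or its calibration disagrees with the other views.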

## How Does Reprojection Error Filtering Work?
Reprojection error filtering can help when there is poor skeleton detection in one or more camera views. It starts by finding the outliers in the reprojection error, defined as all points above a threshold error value calculated as a percentile of the data. It then retriangulates the data with the camera contributing the most error (for that specific frame and marker) removed. If reprojection errors still exceed the threshold, it repeats the process, also removing the camera contributing the second most error. It continues this process until all reprojection errors are below the threshold, or until it reaches the minimum number of cameras. If it hits the minimum camera limit, it sets the remaining data that still has a reprojection error above the threshold to NaNs, which will then be gap-filled in the postprocessing stage. Once this process is complete, it reinserts the filtered data back into the normal data.
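The loop described above can be sketched for a single frame and marker. This is an illustrative simplification, not FreeMoCap's code: `triangulate` is a hypothetical stand-in for the real multi-view triangulation routine, and a real implementation recomputes errors after each retriangulation.

```python
import numpy as np

def filter_point(observations, errors, triangulate, threshold, min_cameras=3):
    """Sketch of the per-frame, per-marker filtering loop.

    `observations` maps camera id -> 2D point; `errors` maps camera id ->
    reprojection error for that point; `triangulate` is hypothetical.
    """
    cams, errs = dict(observations), dict(errors)
    while max(errs.values()) > threshold and len(cams) > min_cameras:
        worst = max(errs, key=errs.get)  # camera contributing the most error
        del cams[worst], errs[worst]
        # A real implementation retriangulates here and recomputes `errs`
        # for the remaining cameras before checking the threshold again.
    if max(errs.values()) > threshold:
        return np.full(3, np.nan)  # outliers remain: gap-fill in postprocessing
    return triangulate(cams)
```

For example, with four cameras where one contributes a much larger error than the threshold, the loop removes just that camera and retriangulates from the remaining three.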

## What are the Parameters and What Do They Do?
The first parameter is "Run Reprojection Error Filtering", a checkbox that sets whether to run the reprojection error filtering process. It is turned off by default.

The second parameter is "Reprojection Error Threshold (%)", which sets the percentile above which reprojection errors will be filtered out. The percentile is converted to an absolute error value, and that absolute value is used as the cutoff for the data. Setting a lower threshold will result in more data being retriangulated, while setting a higher threshold will result in less. It is set to 90% by default.
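For example, converting the 90% setting into an absolute cutoff might look like this (a sketch with hypothetical per-point errors, using NumPy's percentile):

```python
import numpy as np

# Hypothetical reprojection errors (in pixels) for ten tracked points
errors = np.array([0.5, 0.8, 1.1, 1.4, 2.0, 2.6, 3.1, 4.0, 5.5, 9.0])

threshold_percent = 90  # the "Reprojection Error Threshold (%)" setting
absolute_threshold = np.percentile(errors, threshold_percent)
outliers = errors > absolute_threshold  # these points get retriangulated

print(f"cutoff: {absolute_threshold:.2f} px, outliers: {outliers.sum()}")
```

Lowering `threshold_percent` lowers the absolute cutoff, so more points are treated as outliers and retriangulated.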

The third parameter is "Minimum Number of Cameras Left", which sets the minimum number of cameras to remain in the triangulation process during the reprojection error filtering. If the filtering process is able to remove all outliers before the minimum number of cameras is reached, the reprojection error filtering process will stop. If outliers remain after the minimum number of cameras is reached, the remaining outlier data will be set to NaNs. The minimum possible value is 2, as two camera views are required for 3D triangulation. If the minimum number of cameras is greater than or equal to the total number of cameras in the recording, the reprojection error filtering process will not run. It is set to 3 by default.

## Note about Debug Plots
A debug plot for the reprojection error filtering is saved to the `output_data/raw_data` folder of your recording session. The reprojection error filtering runs on a per-camera, per-marker level, while the debug plot displays the mean reprojection error for a frame across all body markers and cameras. This averaging can lead to plots that don't show the filtering results very clearly.