
"Index exceeds matrix dimension" kilosort2 failure + General question on directory/subdirectory setup #3427

Open
TheAnton205 opened this issue Sep 18, 2024 · 3 comments
Labels
sorters Related to sorters module

Comments

@TheAnton205

Trying to run Kilosort2 from a Jupyter notebook with SpikeInterface version 0.99.1, but getting an "index exceeds matrix dimension" error from Kilosort.

Running the same dataset (Open Ephys .dat) directly through the Kilosort GUI, nothing fails.

This is the code running the kilosort:

recording_f = si.bandpass_filter(fullRecording_WT_LFP, freq_min=300, freq_max=1249) #freqs for LFP
recording_cmr = si.common_reference(fullRecording_WT_LFP, reference='global', operator='median')

fs = fullRecording_WT_LFP.get_sampling_frequency()
num_frames = fullRecording_WT_LFP.get_num_frames()
channels = fullRecording_WT_LFP.get_num_channels()
print(channels)
print(num_frames)
print(fs)

# recordingSeconds = recording_cmr.frame_slice(start_frame=0*fs, end_frame=100*fs)
recordingSeconds = fullRecording_WT_LFP.frame_slice(start_frame=0*fs, end_frame=100*fs)
# recordingSeconds = fullRecording_WT_LFP


ss.Kilosort2Sorter.set_kilosort2_path(r"H:\transfer_raster\Kilosort2-master")
secondsSorted = si.run_sorter('kilosort2', recordingSeconds,remove_existing_folder=True,output_folder=r'H:\results', verbose=True)
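One thing worth ruling out with an "index exceeds matrix dimension" error is a slice that runs past the end of the recording (e.g. asking for 100 s from a shorter file). A minimal sanity check, where `check_slice` and the example `fs` / `num_frames` values are hypothetical, not part of the original code:

```python
def check_slice(start_frame, end_frame, num_frames):
    """Raise early if a requested slice runs past the end of the recording."""
    if not (0 <= start_frame < end_frame <= num_frames):
        raise ValueError(
            f"slice [{start_frame}, {end_frame}) is outside a recording "
            f"of {num_frames} frames"
        )
    return end_frame - start_frame

# e.g. a 100 s slice of a 2500 Hz LFP stream needs 250,000 frames available
fs = 2500.0  # assumption: Neuropixels LFP-stream sampling rate
n = check_slice(int(0 * fs), int(100 * fs), num_frames=1_000_000)
```

If this raises for the `recordingSeconds` window, shortening the slice before calling `run_sorter` would be the first fix to try.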

Looking online, I see this could be due to Kilosort not reading the channel count, but printing get_num_channels() shows that SpikeInterface is passing Kilosort the correct dataset.

However, this could also be due to the way I am extracting my Open Ephys data from the folder. Does anyone have a method to simply choose individual continuous.dat files without going through all of this?

My standard directory tree looks like this:
2023 --> Record Node 101 --> (experiment 1 | experiment 2) --> (recording1 | recording2 | recording3) --> (continuous | events) --> (Neuropix-PXI-100.ProbeA-AP | Neuropix-PXI-100.ProbeA-LFP) --> continuous.dat
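If the goal is really just to grab one continuous.dat, the Open Ephys binary format stores each stream as a flat, channel-interleaved int16 file, so the path from the tree above can be built and memory-mapped directly. This is only a sketch: the experiment/recording folder names and the 384-channel count are assumptions, and going this route loses the probe and channel metadata that `se.read_openephys` attaches, which the sorter needs.

```python
import numpy as np
from pathlib import Path

# Path to one stream's continuous.dat, following the directory tree above
# (experiment/recording folder names here are illustrative).
dat_path = (
    Path("2023") / "Record Node 101" / "experiment 1" / "recording1"
    / "continuous" / "Neuropix-PXI-100.ProbeA-LFP" / "continuous.dat"
)

# Open Ephys binary data are flat int16 samples, interleaved by channel.
num_channels = 384  # assumption: full Neuropixels LFP stream

# Memory-map the file and view it as (num_frames, num_channels) without
# loading it all into RAM (uncomment once dat_path exists on disk):
# traces = np.memmap(dat_path, dtype="int16", mode="r").reshape(-1, num_channels)
```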

Here is how I am doing it with spikeinterface:

#extract the recording we need
recordingExtractsWT_LFP = se.read_openephys(recording_file,block_index=0,stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-LFP")
recordingExtractsHOM_LFP = se.read_openephys(recording_file,block_index=1,stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-LFP")

#action potentials
recordingExtractsWT_AP = se.read_openephys(recording_file,block_index=0,stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-AP")
recordingExtractsHOM_AP = se.read_openephys(recording_file,block_index=1,stream_name="Record Node 101#Neuropix-PXI-100.ProbeA-AP")

#WT
wtBaseline_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=0)
wtPTZ_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=1)
wtDeadPTZ_LFP = si.select_segment_recording(recordingExtractsWT_LFP, segment_indices=2)

wtBaseline_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=0)
wtPTZ_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=1)
wtDeadPTZ_AP = si.select_segment_recording(recordingExtractsWT_AP, segment_indices=2)

fullRecording_WT_LFP = si.concatenate_recordings([wtBaseline_LFP,wtPTZ_LFP])
fullRecording_WT_LFP = wtBaseline_LFP
fullRecording_WT_AP = si.concatenate_recordings([wtBaseline_AP,wtPTZ_AP])

#HOM
HOMBaseline_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=0)
HOMPTZ_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=1)
HOMPTZ2_LFP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=2)

HOMBaseline_AP = si.select_segment_recording(recordingExtractsHOM_AP, segment_indices=0)
HOMPTZ_AP = si.select_segment_recording(recordingExtractsHOM_AP, segment_indices=1)
#HOMPTZ2_AP = si.select_segment_recording(recordingExtractsHOM_LFP, segment_indices=2)

fullRecording_HOM_LFP = si.concatenate_recordings([wtBaseline_LFP,wtPTZ_LFP,wtDeadPTZ_LFP,HOMBaseline_LFP,HOMPTZ_LFP])
fullRecording_HOM_AP = si.concatenate_recordings([wtBaseline_AP,wtPTZ_AP,wtDeadPTZ_AP,HOMBaseline_AP,HOMPTZ_AP])

Any thoughts would be great. Thank you!

@h-mayorquin added the sorters label Sep 18, 2024
@alejoe91
Member

@TheAnton205 from your code it seems you're spike sorting the LFP stream. You should use the AP stream!

Can you give it a try?

@anthony-sharonov

I'm sorry but I don't understand. I want to sort the LFP stream, as in the data file that corresponds to our LFPs. In the kilosort2 GUI I can simply use the continuous.dat file for the LFP stream and sort the spikes for that. Maybe I am misunderstanding the way spikeinterface handles the call to kilosort?

@alejoe91
Member

The LFP stream is not supposed to contain any spikes, because they are filtered out by the low-pass filter. Is there any specific reason why you want to spike sort it? I would expect any spike sorter to fail or misbehave, since there should be no apparent spikes.
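The effect is easy to demonstrate without any real recording: a biphasic, millisecond-scale spike waveform passed through an LFP-band low-pass filter nearly vanishes. A sketch, where the 300 Hz cutoff is an illustrative LFP-band edge rather than the exact hardware filter:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30000.0                        # Hz, AP-band sampling rate
t = np.arange(int(0.02 * fs)) / fs  # 20 ms of samples

# Synthetic biphasic spike ~1 ms wide, peak normalized to 1.
spike = (t - 0.01) * np.exp(-((t - 0.01) ** 2) / (2 * 0.0002 ** 2))
spike /= np.abs(spike).max()

# Low-pass at 300 Hz as an illustrative LFP-band edge.
sos = butter(4, 300.0, btype="low", fs=fs, output="sos")
lfp_like = sosfiltfilt(sos, spike)

# Fraction of the spike's peak amplitude that survives the LFP filter.
attenuation = np.abs(lfp_like).max() / np.abs(spike).max()
print(f"peak amplitude retained after low-pass: {attenuation:.1%}")
```

With most of the spike's energy above the cutoff, almost nothing is left for a sorter to detect, which is why sorting the LFP stream fails.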
