First, create a conda environment from the provided environment file (`environment_ubuntu.yml` or `environment_macos.yml`, depending on your operating system):
```
conda env create -f environment_ubuntu.yml
```
Then activate the environment using:
```
conda activate rlds_env
cd vla-scripts/extern/ego4d_rlds_dataset_builder/ego4d
pip install -e .
```
Then, download all necessary dependencies from [Hugging Face](https://huggingface.co/datasets/qwbu/univla-ego4d-rlds-dependencies) and put them under `vla-scripts/extern/ego4d_rlds_dataset_builder`.
#### :two: We extract the interaction frames (video clips between `pre_frame` and `post_frame`) at 2 FPS and save them as `.npy` files.
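As a rough illustration of this step, the sketch below computes which frame indices a 2 FPS sampling would keep between `pre_frame` and `post_frame`. The function name, the 30 fps native frame rate, and the example indices are assumptions for illustration, not taken from the repository's scripts.

```python
def sample_indices(pre_frame, post_frame, native_fps=30, target_fps=2):
    """Pick frame indices between pre_frame and post_frame at target_fps.

    Assumes a constant native frame rate; with 30 fps video and a 2 FPS
    target, every 15th frame is kept.
    """
    step = max(1, round(native_fps / target_fps))
    return list(range(pre_frame, post_frame + 1, step))

# Hypothetical clip: interaction spans frames 300..450 of a 30 fps video.
indices = sample_indices(pre_frame=300, post_frame=450)
print(len(indices))  # -> 11 sampled frames: 300, 315, ..., 450
```

Each sampled frame would then be decoded and stacked into an array before being saved as a `.npy` file.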
We first process the critical information about the interaction clips and key frames (`pre_frame`, `pnr_frame`, and `post_frame`) into a JSON file (`info_clips.json`) with [this script](https://github.com/OpenDriveLab/MPI/blob/79798d0d6c40919adcf3263c6df7e86758fdd59a/prepare_dataset.py), or you can directly download the JSON file from [here](https://huggingface.co/datasets/qwbu/univla-ego4d-rlds-dependencies).
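For orientation, each entry of `info_clips.json` presumably records the three key-frame indices per clip. The sketch below shows one assumed layout: the field names `pre_frame`, `pnr_frame`, and `post_frame` come from the text above, but `clip_uid` and the overall structure are guesses and may not match the real file.

```python
import json

# Hypothetical entry -- the actual schema of info_clips.json may differ.
example_entry = {
    "clip_uid": "example-clip-0001",  # hypothetical identifier
    "pre_frame": 300,                 # frame index before the interaction
    "pnr_frame": 375,                 # point-of-no-return frame
    "post_frame": 450,                # frame index after the interaction
}

# Round-trip through JSON, as a preprocessing script would read it back.
serialized = json.dumps({"clips": [example_entry]})
loaded = json.loads(serialized)
print(loaded["clips"][0]["pnr_frame"])  # -> 375
```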
```bash
python preprocess_ego4d.py \
    --denseclips_dir /path/to/output/denseclips \ # output dir for processed clips
    --info_clips_json /path/to/info_clips.json \ # metadata of keyframes
```
The default save path for the dataset is `/root/tensorflow_datasets/ego4d_dataset`. Processing the whole dataset in one pass may hit memory limits, so we can split the dataset into several parts and process them separately:
```bash
cd vla-scripts/extern/ego4d_rlds_dataset_builder/ego4d
```
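One simple way to realize such a split is to partition the clip list into shards and run the builder once per shard. The helper below is a generic sketch of that partitioning, not code from the repository; the round-robin slicing just keeps shard sizes roughly equal.

```python
def split_into_shards(items, num_shards):
    """Partition items into num_shards roughly equal round-robin shards."""
    return [items[i::num_shards] for i in range(num_shards)]

# Placeholder clip identifiers; each shard can then be processed in a
# separate run of the dataset builder to stay within memory limits.
clip_ids = [f"clip_{i:04d}" for i in range(10)]
shards = split_into_shards(clip_ids, 3)
print([len(s) for s in shards])  # -> [4, 3, 3]
```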