Help needed: making a 3D Augmented Reality conferencing app using the video stream in the Jitsi video conference app
https://github.com/jitsi/jitsi-meet/issues/5269
https://github.com/jitsi/jitsi-meet/issues/5269#issuecomment-621729124
Please refer to the links above.
I recently started a project on 3D VR/AR conferencing:
https://github.com/udexon/Phoom
Excuse the name; I am sure you will notice the similarity to a famous app.
I believe the problem we need to solve is, quite simply, the following:
i. Separate the foreground image (a human face or body) from the (static) background; a rough OpenCV sketch follows this list.
ii. Since most video encoders already exploit this distinction when computing P-frames, B-frames and related inter-frame predictions, what we need to do is identify the code in Jitsi (or the codec libraries it uses) where this happens.
iii. At the display / rendering end, render the extracted foreground image as an avatar in a 3D view (see the second sketch below).
iv. In my repo I have included links to an OpenCV WASM (OpenCV.js) implementation, which I think MAY help.
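To make step (i) concrete, here is a minimal sketch of classic background subtraction using the OpenCV WASM build (OpenCV.js) that my repo links to. This is not Jitsi code; the element IDs `remoteVideo` and `maskCanvas`, the frame rate, and the MOG2 parameters are placeholders you would adapt to the actual video element Jitsi renders.

```typescript
// Sketch: per-frame foreground extraction with OpenCV.js (the WASM build of OpenCV).
// Assumes opencv.js has been loaded via a <script> tag and that the remote
// participant's MediaStream is attached to an HTMLVideoElement with id "remoteVideo".
declare const cv: any; // opencv.js attaches itself to the global scope

const video = document.getElementById('remoteVideo') as HTMLVideoElement;
const cap = new cv.VideoCapture(video);

const frame  = new cv.Mat(video.height, video.width, cv.CV_8UC4);
const fgMask = new cv.Mat(video.height, video.width, cv.CV_8UC1);
// MOG2 models the static background and flags moving pixels as foreground.
const fgbg = new cv.BackgroundSubtractorMOG2(500, 16, true);

const FPS = 30;
function processFrame(): void {
  cap.read(frame);                 // grab the current video frame
  fgbg.apply(frame, fgMask);       // foreground mask: non-zero where pixels changed
  cv.imshow('maskCanvas', fgMask); // debug view on a <canvas id="maskCanvas">
  setTimeout(processFrame, 1000 / FPS);
}
processFrame();
// Note: cv.Mat objects are WASM-allocated; call frame.delete() / fgMask.delete()
// when tearing down, or memory will leak.
```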
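And for step (iii), a minimal sketch of how the masked frames, once drawn onto a canvas with a transparent background, could be textured onto a plane as a flat avatar in a 3D scene. Using three.js here is my own assumption (neither Jitsi nor my repo prescribes a renderer), and `avatarCanvas` is a placeholder.

```typescript
// Sketch: render the masked participant video as a billboard-style avatar in a
// three.js scene. Assumes a <canvas id="avatarCanvas"> holds the masked frames.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  50, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Use the masked frames as a texture on a plane; transparency keeps only the
// extracted foreground visible.
const avatarCanvas = document.getElementById('avatarCanvas') as HTMLCanvasElement;
const texture = new THREE.CanvasTexture(avatarCanvas);
const material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
const avatar = new THREE.Mesh(new THREE.PlaneGeometry(1.6, 0.9), material);
scene.add(avatar);

function animate(): void {
  requestAnimationFrame(animate);
  texture.needsUpdate = true;   // pick up the latest masked frame
  renderer.render(scene, camera);
}
animate();
```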
Collaborators are welcome.
What I need help with from you guys is this:
- find the code in Jitsi (or its codec stack) that performs the foreground / background / P-frame / B-frame separation.
Thank you very much.