I was recently in a Teams meeting where the attendees were put into a virtual theater of seats, and it got me thinking:
Could the Gizmoplex do something similar?
The technology looks like it’s there for this: the software cut out attendees’ faces and shoulders and put them in a row of seats so you could see everyone’s face for a party. What if the host could designate three people to have seats in the shadowy front row over on the left?
That idea seems really cool to me, though I’m not sure exactly how much coding would go into it. I know the chairs are different sizes depending on the season, so that would have to be figured out. Otherwise, it seems the software cutout just needs to be mirrored and turned into a black shadow.
It seems like serendipity that Dr. Forrester made the SOL with extra seating all those years ago.
I think that would be awesome if you could set up group watches in the Gizmoplex and have attendees appear as silhouettes below the screen. Most conferencing software already supports an AI-driven kind of green-screening that guesses where the human outline is. It would be easy enough to render that region in all black and pop it into the video.
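For what it’s worth, the compositing step really is that simple once a segmentation mask exists. Here’s a rough sketch in plain Python — the frame/mask formats, function name, and seat placement are all invented for illustration; a real client would get the mask from something like MediaPipe’s selfie segmentation model and do this on the GPU:

```python
def composite_silhouette(frame, mask, seat_origin):
    """Paint a black silhouette into `frame` wherever `mask` marks a person.

    frame: 2-D list of (r, g, b) tuples (the movie video).
    mask:  2-D list of 0/1 values covering the person's camera crop.
    seat_origin: (row, col) where the top-left of the silhouette lands.
    """
    top, left = seat_origin
    for r, row in enumerate(mask):
        for c, inside in enumerate(row):
            y, x = top + r, left + c
            # Only paint pixels the segmentation model marked as "person",
            # and only if they actually land inside the frame.
            if inside and 0 <= y < len(frame) and 0 <= x < len(frame[0]):
                frame[y][x] = (0, 0, 0)
    return frame
```

The hard part, as noted below, isn’t this overlay — it’s producing a good mask in real time.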
That’s what I was thinking. Then just overlay it on the video. Just have to limit the number of people up front for bigger groups.
Fun fact: I am a software developer, and I currently work on a video conferencing project that uses object detection for background blurring, among other things. From a machine learning perspective, what you suggest is pretty easy; where the problem comes in is with the Gizmoplex living as a web application. In that case, all of the heavy lifting for the object detection has to be done in the browser, and that can lead to some pretty pitiful performance, even on a fast machine with a high-end broadband connection. Google is making strides in that area with the Chrome browser, and Safari has come a long way as well, but not all browsers are equal, and in my experience none of them are quite up to doing a great job at this kind of thing.
Thanks for the insightful reply. I’m going to take this as, possible but not practical because of performance, but one day!
I mean if an umbilical cord connecting a satellite in space to earth is possible, this software should be around the corner.
Yes, but having people upload or choose a silhouette image would be pretty easy.
A Gizmoplex avatar would solve the issue. Just use the avatars. Maybe even make all the avatars silhouettes.
I think one issue in either case would be that the silhouettes are not typically static. If you want them to move around like the person they represent during the movie, you are going to have to use object detection.
Posture Pals has done the work for us. I claim kinda slouched!
Yes. I think the best implementation would be dynamic silhouettes. Again, the nuts and bolts are beyond me. I am just a dreamer.
We’ve definitely had this concept on our minds for a while, and have plenty of concept sketches to prove it… the problem (for now, among others) is that offering customized audio and video chat can be very expensive to integrate and maintain, so we’ve been looking for existing third-party solutions (like Scener) that would help us enable more social interaction without creating an operating budget that would force us to shut them down. We’ve been trying to develop the Gizmoplex so that we have the best possible chance of keeping it up and running, with most or all of its features, regardless of whether it becomes profitable enough to support more seasons.
But believe me, we’d love to eventually see something like this too, and we’re keeping an eye on the possible ways we could make it happen.
When we tried similar things with object detection and background replacement in a pure web app, we initially went with Google’s TensorFlow, which worked fabulously on the Mac Pros we use for development. Once we started testing on a wide variety of machines, however, we found that on older PCs, the performance was abysmal. We ended up scaling back our expectations, and went with OpenCV, which is much more lightweight, for a simple background blur.
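For the curious, the background blur itself is conceptually just a box filter applied only where the mask says “background.” Here’s a toy version in plain Python — the function name and 0/1 mask format are made up, and real libraries vectorize this on the GPU, which is exactly why performance varies so much across machines:

```python
def blur_background(gray, person_mask, radius=1):
    """Box-blur only the background pixels of a grayscale image.

    gray: 2-D list of 0-255 ints.
    person_mask: 2-D list of 0/1; 1 = person (left sharp),
    0 = background (blurred with a (2*radius+1)^2 box filter).
    """
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(h):
        for x in range(w):
            if person_mask[y][x]:
                continue  # keep the person crisp
            total = count = 0
            # Average the surrounding box, clipped at the image edges.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += gray[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out
```

Per frame, at 30 fps, on every pixel — it adds up fast, which is why we had to scale back.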
I would imagine something like Scener is not just a pure web app, and may be able to offload some of the object detection to backend servers. That adds expense, however, as those servers will typically need costly high-end GPUs (thanks, Bitcoin!), and would need to be scaled properly to handle the anticipated load. If cost were no object, it would not be too much of a problem, but it is definitely more difficult to implement than a purely frontend solution. I think both of those things are going to be a problem for the Gizmoplex, at least in the short term. As technology progresses and the Gizmoplex becomes the premier destination for the movie-riffing zeitgeist, however, I am sure all of these issues will be overcome.
Couldn’t you just ask folks who want the silhouette option to buy a green bed sheet and hang it behind them and use cheap chromakey tech rather than trickier object detection?
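Chroma keying really is cheap compared to ML segmentation — it’s just a per-pixel threshold. A minimal sketch in plain Python, assuming the frame is a grid of (r, g, b) tuples; the threshold values are invented starting points you’d tune for real lighting and a real bed sheet:

```python
def chroma_mask(frame, min_green=100, dominance=40):
    """Return a 0/1 person mask from a green-screen frame.

    A pixel counts as background (0 in the mask) when green is both
    bright (>= min_green) and clearly dominant over red and blue
    (by >= dominance). Everything else is assumed to be the person.
    """
    mask = []
    for row in frame:
        mask_row = []
        for r, g, b in row:
            is_green = g >= min_green and g - max(r, b) >= dominance
            mask_row.append(0 if is_green else 1)
        mask.append(mask_row)
    return mask
```

No model, no GPU — which is why chroma key ran fine on hardware decades ago. The catch is the user-side setup: wrinkles, shadows, and uneven lighting all punch holes in the mask.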
Alternatively, you could even sell a “magic” MST3K-themed fiducial marker poster to hang behind people that can make segmenting and scaling easier.
The computer generated silhouette is a good idea for a number of reasons: it’s easy to overlay, it ensures consistency, and it stops people from doing undesirable things via their source image.
One approach that could work is the way the Xbox used to do “lobbies” for multiplayer games. (For all I know they still do it this way.) Your avatar would appear in a room with the others, and there was a menu of pre-programmed movements you could have them do. Basic stuff like nodding and waving was hotkeyed. You could easily do the same for the pre-genned silhouettes, plus have certain actions programmed to occur when the user talks. Otherwise the silhouettes would go into a kind of resting state, maybe randomly shifting in their seats every now and again.
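That lobby model is easy to sketch. Here’s a toy state machine in Python — the pose names, hotkey actions, and the 5% random seat-shift chance are all invented for illustration; a real client would swap sprite frames instead of pose names:

```python
import random


class Silhouette:
    """Toy state machine for a pre-generated seat silhouette."""

    POSES = ("resting", "shifted", "nodding", "waving", "talking")

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.pose = "resting"

    def tick(self, hotkey=None, talking=False):
        """Advance one frame and return the pose to render."""
        if hotkey in ("nodding", "waving"):
            self.pose = hotkey        # explicit hotkeyed action wins
        elif talking:
            self.pose = "talking"     # animate while the mic is active
        elif self.rng.random() < 0.05:
            self.pose = "shifted"     # occasional random shift in the seat
        else:
            self.pose = "resting"
        return self.pose
```

The appeal of this over live segmentation is that nothing leaves the client but a pose name per tick — no video processing at all.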
You would create your custom silhouette much as you create an avatar on a game console, only it would be a lot easier. All Alternaversal would need to come up with is a range of body sizes and shapes, and then provide add-ons like hairstyles and props (hats, big foam hands, etc.). Or of course you could opt for a bot silhouette, also customisable. They could even offer a range of movie-character silhouettes (Robot Monster, Trumpy, Gamera, etc.).
There’s a VR-driven “watch together” feature in Plex that, with a little tinkering to provide a cool SOL space for viewing, would be sorta neat. The avatars are pretty generic, but otherwise something not far off from what’s being pitched here could happen. You can even throw popcorn at the screen.
However… there are too many barriers to entry. Plex accounts are free, so no issues there, but it only works with someone’s local media (not any of the public libraries, which is a shame since Shout! Factory TV is available), viewers all have to be enrolled on the host’s server/libraries (and an admin may not enjoy the idea of constantly adding and removing complete randoms), and not everyone has access to VR gear. Those who do have the gear might not have supported hardware (oddly, Plex only supports Daydream and stuff with Oculus-now-Meta branding).
Pretty sure that, if enough interest was shown and some partnerships happened, most of the obstacles could be addressed. But they’d have to support an ability for non-VR guests to also participate. Or, rather, I’d prefer if it was more inclusive and easier for everyone to participate.
I read this and I thought “Oh, there should be auditorium seating so that your perspective on the screen changes, with a couple of seats on each side 60% blocked by a support pillar…”
And those come with an obstructed-view discount. Pretty nice!