I had no idea how to create a virtual fisheye camera. Unity only provides basic flat cameras that follow the pinhole camera model. I'm sharing what I learned in this project for those of you who want to create a virtual fisheye camera.
I only have experience with Unity, so I will explain things in terms of the Unity environment. However, the basic concept should also apply to other virtual environment tools such as Unreal.
The first thing you need to know is that the virtual fisheye camera does not capture the image directly. It is actually a camera that captures a screen (or display) showing a fisheye image, which is quite different from the default Unity cameras. This is because there is no way to directly change a default camera's projection model.
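One way to set up that final "camera looking at a screen" is to restrict the output camera so it only renders the fisheye screen. The sketch below uses Unity's standard culling mask and layer mechanism for this; the layer name "FisheyeScreen" and the use of an orthographic camera are my own assumptions, not the only possible setup.

```csharp
using UnityEngine;

// Minimal sketch: an output camera that only sees the fisheye screen.
// "FisheyeScreen" is an assumed layer name; create it in the Unity editor first.
public class FisheyeOutputCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        // Render only objects on the "FisheyeScreen" layer,
        // so the scene geometry itself is never drawn by this camera.
        cam.cullingMask = LayerMask.GetMask("FisheyeScreen");
        // An orthographic camera keeps the flat fisheye screen undistorted.
        cam.orthographic = true;
    }
}
```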
Therefore, we have to create a screen that renders fisheye images (the fisheye screen). The screen renders images from the cameras in a camera rig. The number of cameras can vary, but a rig usually has five, because a fisheye camera has to cover all directions. We therefore used five cameras to capture all directions and render them to the fisheye screen; the five cameras capture the five planes around the camera rig, as in the sketch below.
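Here is a minimal sketch of such a rig. It assumes five 90° perspective cameras (forward, left, right, up, down) parented under one rig object; the script, names, and structure are illustrative, not the exact implementation from the project.

```csharp
using UnityEngine;

// Minimal sketch of a five-camera rig (forward, left, right, up, down),
// each covering one 90-degree face of an imaginary cube around the rig.
// The names and structure here are illustrative assumptions.
public class FisheyeCameraRig : MonoBehaviour
{
    public Camera[] faceCameras = new Camera[5];

    static readonly Vector3[] faceDirections =
    {
        Vector3.forward, Vector3.left, Vector3.right, Vector3.up, Vector3.down
    };

    void Awake()
    {
        for (int i = 0; i < faceDirections.Length; i++)
        {
            Vector3 dir = faceDirections[i];
            var go = new GameObject("FaceCamera_" + i);
            go.transform.SetParent(transform, false);
            // Pick a valid "up" vector when the camera looks straight up or down.
            Vector3 up = (dir == Vector3.up || dir == Vector3.down) ? Vector3.forward : Vector3.up;
            go.transform.localRotation = Quaternion.LookRotation(dir, up);
            var cam = go.AddComponent<Camera>();
            // 90-degree vertical FOV; with a square render target the horizontal
            // FOV is also 90 degrees, so the five cameras cover five cube faces.
            cam.fieldOfView = 90f;
            faceCameras[i] = cam;
        }
    }
}
```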
You can think of this model as a fisheye camera sitting inside a transparent cube and capturing five of the cube's faces. Each camera then renders to the fisheye screen. So what we have to do is create meshes that map each point on a cube face to the flat fisheye plane. Then, if we make each camera in the rig render its image onto its mesh, the result is a fisheye image.
The fisheye screen is split into five pieces, one per camera, and each camera's image is mapped to its piece. The mapping is done with a texture: in Unity, it is possible to make a screen that shows the image from a camera. So what we need is a mesh with a texture that maps the original image from a camera to its part of the fisheye image. Together, the meshes compose the fisheye screen, as in the sketch below.
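As a rough illustration, each rig camera can render into its own RenderTexture, and that texture can be assigned to the material of the corresponding screen piece. "_MainTex" is the conventional main-texture property of Unity's built-in shaders; the class, field names, and texture size below are my own assumptions.

```csharp
using UnityEngine;

// Minimal sketch: route one rig camera into the material of one screen piece.
// "screenPiece" is assumed to be the MeshRenderer on the piece of the fisheye
// screen mesh that corresponds to this camera.
public class FisheyeScreenPiece : MonoBehaviour
{
    public Camera rigCamera;          // one of the five cameras in the rig
    public MeshRenderer screenPiece;  // the mesh piece this camera feeds

    void Start()
    {
        // A square texture matches the square 90-degree face the camera captures.
        var rt = new RenderTexture(1024, 1024, 24);
        rigCamera.targetTexture = rt;
        // "_MainTex" is the standard main texture slot of built-in shaders.
        screenPiece.material.SetTexture("_MainTex", rt);
    }
}
```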
Illustration of mapping a vertex to the fisheye image.
Here is how I created a mesh for each camera. I created a cube with a grid of vertices on each face and mapped each vertex to the fisheye image plane (the fisheye screen). The mapped vertices then form a new mesh for the fisheye screen.
The above image shows how a vertex on the right camera's image plane is mapped to the fisheye image. (i, j) is a vertex in the right image plane, and (u, v) is the texture (UV map) coordinate of that vertex. Using a fisheye projection model, it is possible to compute where each vertex lands on the fisheye image. We map both the vertex and its UV coordinate to the new fisheye image.
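To make the mapping concrete, the sketch below assumes the common equidistant fisheye model, where a ray at angle θ from the optical axis lands at radius r = f·θ on the image plane. It builds the warped mesh for the front cube face only: each grid point keeps its (u, v) coordinate into the camera's flat image, while its vertex position is moved onto the fisheye screen. The grid resolution, face choice, and function names are my own assumptions, and the same idea applies to the other four faces.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Minimal sketch: build a mesh for the front face of a unit cube whose vertices
// are warped onto the fisheye image plane (equidistant model, r = f * theta),
// while the UVs still address the camera's original flat image.
public static class FisheyeMeshBuilder
{
    // Map a 3D direction to a point on the fisheye screen (the z = 0 plane).
    static Vector3 ProjectToFisheye(Vector3 dir, float focalLength)
    {
        dir.Normalize();
        float theta = Mathf.Acos(dir.z);           // angle from the optical (forward) axis
        float r = focalLength * theta;             // equidistant projection: r = f * theta
        float phi = Mathf.Atan2(dir.y, dir.x);     // direction around the axis
        return new Vector3(r * Mathf.Cos(phi), r * Mathf.Sin(phi), 0f);
    }

    // Build the warped front-face mesh with gridSize x gridSize quads.
    public static Mesh BuildFrontFace(int gridSize, float focalLength)
    {
        var vertices = new List<Vector3>();
        var uvs = new List<Vector2>();
        var triangles = new List<int>();

        for (int j = 0; j <= gridSize; j++)
        {
            for (int i = 0; i <= gridSize; i++)
            {
                float u = (float)i / gridSize;     // UV into the camera's flat image
                float v = (float)j / gridSize;
                // Point (i, j) on the front face of a unit cube centered on the rig.
                var onCube = new Vector3(u - 0.5f, v - 0.5f, 0.5f);
                vertices.Add(ProjectToFisheye(onCube, focalLength));
                uvs.Add(new Vector2(u, v));
            }
        }

        // Two triangles per grid cell. If the screen is invisible from your
        // output camera's side, flip the winding (swap two indices per triangle).
        for (int j = 0; j < gridSize; j++)
        {
            for (int i = 0; i < gridSize; i++)
            {
                int a = j * (gridSize + 1) + i;
                int b = a + 1;
                int c = a + gridSize + 1;
                int d = c + 1;
                triangles.AddRange(new[] { a, c, b, b, c, d });
            }
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}
```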