Raytracing in ED for everyone!

Since a lot of people use Elite as a screenshot generator, I would think a good start would be using Nvidia Ansel to do RT screenshots. Much less effort on FDev's part to support than full-on ray tracing.
 
So here's a question for you graphic gurus - the ray tracing I'm familiar with (generating photorealistic scenes from software like RenderMan) requires the entire scene to be used, not just what's put on the screen. An example of this: when we fly towards the sun at a certain angle, we should see our own reflections in our cockpit glass as the bright sun reflects off our holo-mes. This is why it can take hours, if not days (depending on your hardware), to render a single frame. I have to think that the video game version of raytracing is "raytracing lite", because this seems like a tremendous burden to handle in real time.

In a game like Elite, you could probably get away with multi-pass selective ray-tracing, where things inside the cockpit are illuminated by local stars and perhaps even close planetshine, but not every light coming from inside a station... I'm just really curious now what video game raytracing is and how it's done compared to "real" raytracing (the latter sometimes taking shortcuts when LOTS of light sources are in play).
 
The very simple answer is: Ray tracing reduces detail by limiting the number of times you let the ray reflect off objects, and limiting the number of light sources you need to calculate for the scene.

So for example: "The ultra detailed I don't care how long it takes to render" option will take into account all the stars in the sky, all the cockpit lights, all the objects, and just keep letting that ray bounce around until it stops. The "I want this to render in less than 20ms" option would throw out all light sources below a certain intensity, and only let the ray bounce once.
 
The very simple answer is: Ray tracing reduces detail by limiting the number of times you let the ray reflect off objects, and limiting the number of light sources you need to calculate for the scene....
But you still need to process off-screen geometry, right? In order to see my reflection in the glass, the game has to generate my holo-me for the rays to bounce off of.

Speaking of, wouldn't you ideally want at least two bounces? Otherwise you won't get reflections or scatter lighting, in which case there's very little reason to even use ray-tracing over traditional methods. Light needs to hit me, then hit the glass (and all other surfaces), then hit the simulated camera.

Funny, I just looked up at my window and see reflections in it. Now I'm looking toward the stainless steel office refrigerator and I'm seeing the window-shaped blurred scatter from the sunlight hitting the wall beside it. Both are double-bounce effects. I can't remember what the default bounce count was for my rendering software.... Anyway, two bounces seems to be the minimum to get any true value out of the system.
 
But you still need to process off-screen geometry, right? In order to see my reflection in the glass, the game has to generate my holo-me for the rays to bounce off of.
Depends on what you mean by "process". Does the game render your holo-me? No. The ray bounces off the glass and stops at the first opaque object (the holo-me), then calculates the color of that pixel at that point by computing the material of the object and the light sources striking it. Of course the game will be doing some processing to do the collision checks.

Speaking of, wouldn't you ideally want at least two bounces? Otherwise you won't get reflections or scatter lighting, in which case there's very little reason to even use ray-tracing over traditional methods. Light needs to hit me, then hit the glass (and all other surfaces), then hit the simulated camera.
Depends on how you want to do the counting. Each ray corresponds to a single pixel on your computer screen. The ray starts at the camera (aka your monitor), travels to the glass of your ship canopy, partially passes through the glass, refracting into empty space, and partially bounces off the glass, striking your holo-me. The engine then combines the results of both of those rays, the opacity of the glass, and the light sources striking the glass to determine the color of that pixel. I count that as one bounce.

A second bounce would be reflecting off the glass of your pilot's visor, with the ray splitting again for refraction and reflection.

Again, this is the very simplified version of it but you should get the idea. The more light sources and bounces you allow, the greater the realism in the photo.
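If it helps, here's how I picture that split in code - a deliberately silly Python sketch where the two "what happens next" functions are just hard-coded stand-ins (not anything Elite or any real engine does), purely to show the one-bounce reflect/refract split and the blend by glass opacity:

```python
GLASS_OPACITY = 0.15   # how strongly the canopy reflects vs. lets light through

def trace_beyond_glass(ray):
    # stand-in for whatever the refracted ray hits outside the canopy (stars, planet, station...)
    return (0.02, 0.02, 0.05)      # near-black space with a hint of blue

def trace_reflection(ray):
    # stand-in for the first opaque thing the reflected ray hits inside the cockpit (the holo-me)
    return (0.80, 0.55, 0.40)      # skin-ish colour of the pilot

def shade_canopy_pixel(camera_ray):
    """One 'bounce' in the sense used above: the camera ray splits once at the glass."""
    transmitted = trace_beyond_glass(camera_ray)   # refracted ray continuing into space
    reflected = trace_reflection(camera_ray)       # reflected ray striking the holo-me
    # Blend the two contributions by the glass opacity (a real engine would also
    # apply Fresnel, tinting, and the lights striking the glass itself).
    return tuple(
        (1 - GLASS_OPACITY) * t + GLASS_OPACITY * r
        for t, r in zip(transmitted, reflected)
    )

print(shade_canopy_pixel(camera_ray=None))   # -> mostly space, with a faint reflection of the pilot
```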
 
Depends on what you mean by "process". Does the game render your holo-me? No. The ray bounces off the glass and stops at the first opaque object (the holo-me), then calculates the color of that pixel at that point by computing the material of the object and the light sources striking it. Of course the game will be doing some processing to do the collision checks.
Exactly, but if we're talking hardware doing the ray-tracing, then this is additional geometry and texture data (though the latter may already be in video memory) being sent to the video card for processing. Basically the video card has to "process" the entire cockpit rather than just what is being displayed on the screen, and in a sense it is rendering it if the reflection of my cockpit / bridge can be seen in the glass. In a sense...

Depends on how you want to do the counting....
A second bounce would be reflecting off the glass of your pilot's visor, with the ray splitting again for refraction and reflection.
Yeah I tend to forget about the whole "everything is reversed" in ray-tracing (as a scene designer I'm thinking of actual light paths vs render paths). As long as there are enough bounces to allow a single reflection / scatter illumination, I think that would cover most games like Elite.

As for light sources, my experience is with the lowly PS4, where only the main star casts shadows. There are no shadows inside a station. Is this different on PC? It would be cool to see the multiple shadows cast by my ship illuminated on the landing pad. I suspect real-time ray-tracing could handle this if it were limited to my pad.... Stations have an insane number of light sources all over the place, so I suppose the distant lights could all be averaged out as ambient...
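Just to sketch what I mean by limiting it to the pad (all names and numbers below are made up for illustration - this isn't how COBRA or any real engine does it): nearby lights get actual shadow rays, everything far away gets folded into a flat ambient term.

```python
import math

SHADOW_LIGHT_RADIUS = 50.0        # only lights within this distance of the pad cast shadows
AMBIENT_PER_DISTANT_LIGHT = 0.002 # everything further away is averaged into ambient

def distance(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def shadow_ray_blocked(point, light_pos):
    # stand-in occlusion test: pretend the ship blocks light coming from almost directly above
    return light_pos[1] > point[1] and abs(light_pos[0] - point[0]) < 5

def light_pad_point(point, station_lights):
    brightness = 0.0
    for light_pos, intensity in station_lights:
        if distance(point, light_pos) <= SHADOW_LIGHT_RADIUS:
            if not shadow_ray_blocked(point, light_pos):   # real shadow ray for nearby lights
                brightness += intensity
        else:
            brightness += AMBIENT_PER_DISTANT_LIGHT        # distant light: ambient only, no shadow
    return brightness

pad_point = (0.0, 0.0, 0.0)
station_lights = [((2.0, 10.0, 0.0), 0.6),     # pad floodlight overhead -> shadowed by the ship
                  ((30.0, 8.0, 5.0), 0.4),     # nearby light -> shadow ray fired, not blocked
                  ((400.0, 50.0, 0.0), 0.5)]   # far interior light -> lumped into ambient
print(light_pad_point(pad_point, station_lights))
```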

Fascinating topic!
 
Elite would be THE game for ray tracing.

When you are on a planet surface, for example, and see half of the star above the horizon yet the ground is not lit, that is the prime example. But beyond that, this game sorely needs multiple light sources and tidbits like shadows from the ring systems.
 
Elite would be THE game for ray tracing....
A raytraced Elite would be EPYC.
 
So here's a question for you graphic gurus - the ray tracing I'm familiar with (generating photorealistic scenes from software like RenderMan) requires the entire scene to be used, not just what's put on the screen. An example of this: when we fly towards the sun at a certain angle, we should see our own reflections in our cockpit glass as the bright sun reflects off our holo-mes. This is why it can take hours, if not days (depending on your hardware), to render a single frame. I have to think that the video game version of raytracing is "raytracing lite", because this seems like a tremendous burden to handle in real time.

In a game like Elite, you could probably get away with multi-pass selective ray-tracing, where things inside the cockpit are illuminated by local stars and perhaps even close planetshine, but not every light coming from inside a station... I'm just really curious now what video game raytracing is and how it's done compared to "real" raytracing (the latter sometimes taking shortcuts when LOTS of light sources are in play).
In short it's all about BVH.
You can read more about it here in section D:

But in short, the RT cores do the BVH traversal and the Tensor cores do the denoise (and DLSS if applied). This is why RTX cards are many times faster at raytracing than any other card.
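For anyone who hasn't met a BVH before, the idea is just a tree of bounding boxes: a ray only tests the contents of the boxes it actually enters, instead of every object in the scene, and the RT cores accelerate that box/triangle testing in hardware. A deliberately oversimplified toy version below (1D "rays" that are really just positions, made-up class names, nothing like a production implementation) just to show the skip-whole-boxes idea:

```python
class BVHNode:
    def __init__(self, lo, hi, objects=None, left=None, right=None):
        self.lo, self.hi = lo, hi           # bounding interval of this node
        self.objects = objects or []        # leaf contents (stand-ins for triangles)
        self.left, self.right = left, right

def intersect(node, ray_pos, hits, log):
    if not (node.lo <= ray_pos <= node.hi):
        log.append(f"skip box [{node.lo}, {node.hi}]")   # whole subtree rejected in one test
        return
    if node.objects:                        # leaf: test the actual objects
        for obj in node.objects:
            log.append(f"test object at {obj}")
            if obj == ray_pos:
                hits.append(obj)
        return
    intersect(node.left, ray_pos, hits, log)
    intersect(node.right, ray_pos, hits, log)

# Two leaf boxes under one root; the "ray" only ever descends into the box containing it.
root = BVHNode(0, 100,
               left=BVHNode(0, 50, objects=[10, 20, 30]),
               right=BVHNode(50, 100, objects=[60, 70, 80]))
hits, log = [], []
intersect(root, 70, hits, log)
print(log)    # left box skipped entirely; only the right box's objects are tested
print(hits)   # [70]
```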



Btw, for those wondering how fast RTX cards are vs non-RTX cards in rays only: it's 6-10 billion rays per second for RTX cards, while Pascal / non-RTX Turing / Vega / Navi are in the 200-500 million rays area. No denoise, no DLSS. This is also why AMD hasn't added DXR support; it would only fuel RTX sales, as the DXR support on GTX cards did.

From the sales of RTX cards, I could imagine we would start to see games 2-3 years from now that can't run without RT/Tensor cores (in any practical way). It's so much easier from the developer side and the result is so much better.
 
No way will that happen... not in 2-3 years' time. There will be a non-ray-tracing lighting option for the foreseeable future imo. My fear is it will be like PhysX, a fantastic thing which never takes off outside of a few fantastic examples because it is limited to certain GPUs.

I HOPE this is not the case, and that what we see in 2-3 years' time is raytracing being ubiquitous..... but it will be as an option and not mandatory, I would bet money on that.

(2-3 years will also be perfect timing for me to upgrade my 1080ti to a GPU which is capable of it without bringing it to its knees :) )

Remember Nvidia are still bringing out brand new GTX cards and they are fairly high-end, and AMD are as well.... these will not be forced into obsolescence in such a short time.
 
No way will that happen... not in 2-3 years' time. There will be a non-ray-tracing lighting option for the foreseeable future imo....
It's all about volume. 2-3 years from now the next generation will already be rolling out with even more RTX SKUs. And the 2060, if not the 1660 Ti, may be the best-selling card of this generation. RTX sales are also currently accelerating. We know Intel is very likely to support Tensor cores, and the question is whether they are able to support RT cores in any way as well. AMD at this point must be considered gone, just like the stillborn Radeon VII with its limited production run. Intel will replace AMD at the bottom in 2020, and after that any AMD presence will be symbolic at best.

If we pretend for a moment that the 2060 will sell as much as the 1060 did, and the same for the higher SKUs, then you can roughly mobilize 25% of all the 250M? Steam users for a raytraced game alone, and that's before a new card launches and/or any Intel cards arrive with full feature support.
 
I think you massively underestimate the relevance of the console, and AMD have 2 of the big 3 consoles wrapped up.
AMD ARE supporting ray tracing as well, though only from Navi onwards of course. The 1660 Ti IS going to be a good seller, and that is exactly why I think your ray-tracing-only games are not going to happen.
The 1660 Ti is not RTX capable.
Finally... don't judge the Radeon VII imo....... it is not and was never meant to be a major production run; it has come from failed Vega 20 chips (MI50 iirc), better that than bin them! Even then though, they are not unimpressive cards and are better in some areas than the RTX 2080 (just not gaming).

But hey, it's all guesswork; we should bookmark this and revisit in late 2021 / early 2022 :)
 
I think you massively underestimate the power of the console, and AMD have 2 of the big 3 consoles wrapped up.
AMD ARE supporting ray tracing as well, though only from Navi onwards of course. The 1660 Ti IS going to be a good seller, and that is exactly why I think your ray-tracing-only games are not going to happen.
The 1660 Ti is not RTX capable.

But hey, it's all guesswork; we should bookmark this and revisit in late 2021 / early 2022 :)
The PS5 will support raytracing as much as any older card. It doesn't have RT cores or Tensor cores. It's a gimmick checkbox, just like its 8K support (or even 4K).

Let's not try and fool anyone. The only cards to have any meaningful raytracing ability will be RTX cards and any possible Intel cards with the same feature set. No other cards, period. The Noir demo was already a disastrous showing of a discount attempt, and more an attempt from Crytek to say "hey look, we ain't dead (yet)". Also, as far as I know, they haven't released the demo for any 3rd party to run either.

Consoles don't really have much power outside a few games. And when the PS5 releases around Q4 2020 (production Q3 2020), its GPU will be quite anemic too. It's already at the lower end of today's landscape. By release you may be able to buy $150 dGPUs with better metrics. The only good thing about the PS5 is the abandoning of the Jaguar CPU that has plagued current consoles way too long.

And let's see what Microsoft does. Cloud gaming is going to be the new console. Even Google has jumped on that wagon now, though they have already chosen to try and fail as much as they could from the start: Linux, Vulkan etc. (think Steam Machine).

No mobility for consoles is going to hurt them a lot. Gamers other than dinosaurs like myself prefer gaming on the go. That's also why laptop sales now account for something like 75% and gaming laptops still show large double-digit growth. Especially when we enter the 5G era and the death of all land-based lines.

In relation to the topic, I wonder what level of work would be required for Frontier to add DLSS support to their COBRA engine. It should be relatively easy, and then they could let Nvidia (Intel?) do the training. People with Tensor-core-based cards could enjoy big FPS boosts and better quality too.
 
You probably know more than me on this but, has Navi's ray tracing ability been confirmed yet? I thought Navi was a whole new architecture, so isn't it possible it will be able to make a decent shout at ray tracing? (Like I say, maybe I have missed some info regarding Navi)
 
You probably know more than me on this but, has Navi's ray tracing ability been confirmed yet? I thought Navi was a whole new architecture, so isn't it possible it will be able to make a decent shout at ray tracing? (Like I say, maybe I have missed some info regarding Navi)
Navi is not a new architecture. It's the same GCN as usual. The first new architecture rumoured from AMD is post-Navi and called Arcturus. However, the RTG division in AMD has close to no R&D, and all the R&D money is spent on more or less direct shrinks with huge IC and mask costs, due to the mistake of going 7nm, which drives up cost massively with little to no benefit.

For some random guy in the forum, you sure do [pretend to] know a lot about a product that hasn't even officially been announced yet. I think I'll wait for Sony's keynote and press releases.
When was the last time a console could live up to its paper specs? The 2 Tflops PS3? The 4K PS4? Even 1080p 60 FPS was hard to reach. Also, the supplying company doesn't have the technology or the R&D currently. If the PS5 were a 2023+ release and AMD started ramping RTG's R&D budget today, dedicating the entire company's R&D to it, then sure, it would be possible.

There is a reason why anyone with any ability has fled RTG and run to Apple, Nvidia, Intel, Qualcomm and so on. You can't work with no tools.
 
due to the mistake of going 7nm, which drives up cost massively with little to no benefit.
Lower power requirements allowing for wider applications (such as laptops, custom APU designs, etc.) are quite a benefit, just not one for a niche feature in expensive GPUs. AMD is going for the integrated and low/mid-tier market where most of the money is made. There is a reason why nV brought out the 1660.
 
Lower power requirements allowing for wider applications (such as laptops, custom APU designs, etc.) are quite a benefit, just not one for a niche feature in expensive GPUs. AMD is going for the integrated and low/mid-tier market where most of the money is made. There is a reason why nV brought out the 1660.
Vega 20 managed 73 MHz more (before it was further reduced) and twice the HBM2 stacks from going from 14nm to 7nm, while still being a 300W toaster. Plus its cost went up significantly, so even at $700 it was sold at a $100-200 loss. The biggest problem with smaller nodes is that they cost more transistor for transistor after 14nm, so even plain shrinks increase cost.

The 1660 didn't have anything to do with AMD; it was to make Nvidia users upgrade. It's not a static market, you know.

Not that node names have any relevance anymore; it's pure PR, renamed whenever needed. TSMC's 20nm node, for example, managed to become a 12nm node. The last time node names meant anything was 20 years ago. Today you have to look at the actual density and electrical properties, else it's just Samsung bananas vs TSMC mangos vs Intel pineapples. TSMC also just this week renamed a 7nm node to 6nm because... Samsung had put a 5nm node on a PR slide and TSMC had already used that number ;)

Sony is also slowly preparing people for the PS5 costing quite a bit more than the PS4.
 