> The very simple answer is: ray tracing reduces detail by limiting the number of times you let a ray reflect off objects, and by limiting the number of light sources you need to calculate for the scene. So for example: the "ultra detailed, I don't care how long it takes to render" option will take into account all the stars in the sky, all the cockpit lights, all the objects, and just keep letting the ray bounce around until it stops. The "I want this to render in less than 20 ms" option would throw out all light sources below a certain intensity and only let the ray bounce once.

But you still need to process off-screen geometry, right? In order to see my reflection in the glass, the game has to generate my holo-me for the rays to bounce off of.
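The two quality knobs described above (a bounce budget, plus an intensity cutoff that throws out dim light sources) can be sketched in toy form. Nothing here is real engine code; the half-strength bounce falloff and the light values are invented for illustration:

```python
def shade(depth, lights, max_bounces, min_intensity):
    """Approximate radiance gathered by one ray in this toy model."""
    # Cull light sources below the intensity threshold up front.
    kept = [i for i in lights if i >= min_intensity]
    radiance = sum(kept)
    # Follow a reflected ray only while the bounce budget allows it,
    # assuming each bounce returns half the gathered light.
    if depth < max_bounces:
        radiance += 0.5 * shade(depth + 1, lights, max_bounces, min_intensity)
    return radiance

lights = [10.0, 1.0, 0.01]  # a star, a cockpit lamp, a faint indicator LED
ultra = shade(0, lights, max_bounces=8, min_intensity=0.0)  # "ultra" preset
fast = shade(0, lights, max_bounces=1, min_intensity=0.1)   # "<20 ms" preset
```

The "fast" preset both drops the faint light and stops after one bounce, which is exactly the kind of trade-off the post above describes.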
> But you still need to process off-screen geometry, right? In order to see my reflection in the glass, the game has to generate my holo-me for the rays to bounce off of.

Depends on what you mean by "process". Does the game render your holo-me? No. The ray bounces off the glass and stops at the first opaque object (the holo-me), then calculates the color of the pixel at that point by computing the material of the object and the light sources striking it. Of course the game will be doing some processing to do the collision detection.
> Speaking of, wouldn't you ideally want at least two bounces? Otherwise you won't get reflections or scatter lighting, in which case there's very little reason to even use ray tracing over traditional methods. Light needs to hit me, then hit the glass (and all other surfaces), then hit the simulated camera.

Depends on how you want to do the counting. Each ray is a single pixel on your computer screen. The ray starts at the camera (aka your monitor), travels to the glass of your ship's canopy, partially passes through the glass, refracting into empty space, and also bounces off the glass and strikes your holo-me. The engine then combines the results of both of those rays, the opacity of the glass, and the light sources striking the glass to determine the color of that pixel. I count that as one bounce.
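The glass hit described above (one ray splitting into a refracted ray and a reflected ray, blended by the glass's opacity) can be put in toy form. The colors, the opacity value, and the function names here are all invented for illustration:

```python
def blend(a, b, t):
    """Linear blend of two RGB tuples: t=0 gives a, t=1 gives b."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def shade_glass(refracted, reflected, opacity):
    # The engine combines the refracted and reflected rays' results,
    # weighted by how opaque (mirror-like) the glass is.
    return blend(refracted, reflected, opacity)

space = (0.0, 0.0, 0.05)   # refracted ray continues into empty space
holo_me = (0.9, 0.6, 0.2)  # reflected ray strikes the holo-me
pixel = shade_glass(space, holo_me, opacity=0.3)
```

A clearer pane (lower opacity) would show mostly what is behind the glass; a more mirror-like one would show mostly the holo-me.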
> Depends on what you mean by "process". Does the game render your holo-me? No. The ray bounces off the glass and stops at the first opaque object (the holo-me), then calculates the color of the pixel at that point by computing the material of the object and the light sources striking it. Of course the game will be doing some processing to do the collision...

Exactly, but if we're talking hardware doing the ray tracing, then this is additional geometry and texture data (though the latter may already be in video memory) being sent to the video card for processing. Basically the video card has to "process" the entire cockpit rather than just what is being displayed on the screen, and in a sense it is rendering it if the reflection of my cockpit / bridge can be seen in the glass. In a sense...
> Depends on how you want to do the counting....

Yeah, I tend to forget about the whole "everything is reversed" thing in ray tracing (as a scene designer I'm thinking of actual light paths vs render paths). As long as there are enough bounces to allow a single reflection / scatter illumination, I think that would cover most games like Elite.
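The "everything is reversed" point above (rays traced from the camera out through each pixel, rather than from the light sources) can be sketched as generating one primary ray per pixel. The 4x3 framebuffer and the pinhole-camera mapping are toy assumptions:

```python
def primary_ray(px, py, width, height):
    """Direction from a camera at the origin through pixel (px, py),
    for an image plane one unit in front of the camera."""
    x = (px + 0.5) / width * 2 - 1   # map pixel center to [-1, 1]
    y = 1 - (py + 0.5) / height * 2  # flip so +y points up
    return (x, y, 1.0)

# One primary ray per pixel; each would then be intersected against the
# scene and bounced according to the budget discussed earlier.
rays = [primary_ray(px, py, 4, 3) for py in range(3) for px in range(4)]
```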
A second bounce would be reflecting off the glass of your pilot's visor, with the ray splitting again for refraction and reflection.
> Elite would be THE game for ray tracing.

A raytraced Elite would be EPYC.
The prime example is when you are on a planet's surface and can see half of the star above the horizon, yet the ground is not lit; beyond that, this game sorely needs multiple light sources and tidbits like shadows cast by the ring systems.
> So here's a question for you graphics gurus: the ray tracing I'm familiar with (generating photorealistic scenes from software like RenderMan) requires the entire scene to be used, not just what's put on the screen. An example of this would be when we fly towards the sun at a certain angle: we should see our own reflections in our cockpit glass as the bright sun reflects off our holo-mes. This is why it can take hours, if not days (depending on your hardware), to render a single frame. I have to think that the video-game version of ray tracing is "ray tracing lite", because this seems like a tremendous burden to handle in real time.

In short, it's all about BVH.
In a game like Elite, you could probably get away with multi-pass selective ray tracing, where things inside the cockpit are illuminated by local stars and perhaps even close planetshine, but not by every light coming from inside a station... I'm just really curious now what video-game ray tracing is and how it's done compared to "real" ray tracing (the latter sometimes taking shortcuts when LOTS of light sources are in play).
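The "it's all about BVH" answer above is worth unpacking: a bounding volume hierarchy wraps the whole scene (on-screen geometry or not) in nested bounding volumes, so each ray only tests the branches its path can possibly enter instead of every object. A minimal 1-D sketch, with objects reduced to intervals and the ray reduced to a point query; all names are illustrative:

```python
def build_bvh(intervals):
    """Wrap (lo, hi) intervals in a tree of nested bounding intervals."""
    if len(intervals) == 1:
        return {"bounds": intervals[0], "leaf": intervals[0]}
    intervals = sorted(intervals)
    mid = len(intervals) // 2
    left, right = build_bvh(intervals[:mid]), build_bvh(intervals[mid:])
    bounds = (min(left["bounds"][0], right["bounds"][0]),
              max(left["bounds"][1], right["bounds"][1]))
    return {"bounds": bounds, "left": left, "right": right}

def hits(node, x):
    """Leaf intervals containing x; whole subtrees are pruned early."""
    lo, hi = node["bounds"]
    if not lo <= x <= hi:
        return []  # the ray cannot enter this volume at all
    if "leaf" in node:
        return [node["leaf"]]
    return hits(node["left"], x) + hits(node["right"], x)

scene = [(0, 2), (1, 3), (5, 6), (8, 9)]  # everything, on-screen or not
bvh = build_bvh(scene)
```

This is why whole-scene ray tracing is tractable in real time: the cost per ray grows roughly with the depth of the tree, not the number of objects.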
> no way will that happen... not in 2-3 years time. there will be a non-ray-tracing lighting option for the foreseeable future imo. my fear is it will be like PhysX, a fantastic thing which never takes off outside of a few fantastic examples because it is limited to certain GPUs.

It's all about volume. 2-3 years from now the next generation will already be rolling out with even more RTX SKUs, and the 2060, if not the 1660 Ti, may be the best-selling card of this generation. RTX sales are also currently accelerating. We know Intel is very likely to support Tensor cores, and the question is whether they are able to support RT cores in any way as well. AMD at this point must be considered gone, just like the stillborn Radeon VII with its limited production run. Intel will replace AMD at the bottom in 2020, and after that any AMD presence will be symbolic at best.
i HOPE this is not the case and what we see is that in 2-3 years' time ray tracing is ubiquitous... but it will be an option and not mandatory, i would bet money on that.
(2-3 years will also be perfect timing for me to upgrade my 1080 Ti to a GPU which is capable of it without bringing it to its knees.)
remember nvidia are still bringing out brand-new GTX cards and they are fairly high-end, and AMD are as well... these will not be forced into obsolescence in such a short time.
> I think you massively underestimate the power of the console, and AMD have 2 of the big 3 consoles wrapped up.

The PS5 will support ray tracing as much as any older card does. It doesn't have RT cores or Tensor cores. It's a gimmick checkbox, just like its 8K support (or even 4K).
AMD ARE supporting ray tracing as well, from navi onwards of course. the 1660ti IS going to be a good seller, and that is exactly why i think your ray-tracing-only games are not going to happen.
1660ti is not RTX capable.
but hey, it's all guesswork, we should bookmark this and revisit in late 2021 / early 2022
> You probably know more than me on this but, has Navi's ray tracing ability been confirmed yet? I thought Navi was a whole new architecture, so isn't it possible it will be able to make a decent shout at ray tracing? (Like I say, maybe I have missed some info regarding Navi.)

Navi is not a new architecture; it's the same GCN as usual. The first new architecture rumoured from AMD is post-Navi and called Arcturus. However, the RTG division in AMD has close to no R&D, and all the R&D money is spent on more or less direct shrinks, with huge IC and mask costs due to the mistake of going 7nm, which drives up cost massively with little to no benefit.
> For some random guy in the forum, you sure do [pretend to] know a lot about a product that hasn't even officially been announced yet. I think I'll wait for Sony's keynote and press releases.

When was the last time a console lived up to its paper specs? The 2 Tflops PS3? The 4K PS4? Even 1080p at 60 FPS was hard to reach. Also, the supplying company doesn't currently have the technology or the R&D. If the PS5 were a 2023+ release and AMD started ramping RTG's R&D budget today, dedicating the entire company's R&D to it, then sure, it would be possible.
> due to the mistake of going 7nm that drives up cost massively with little to no benefit.

Lower power requirements allowing for wider applications (such as laptops, custom APU designs, etc.) are quite a benefit, just not one for a niche feature in expensive GPUs. AMD is going for the integrated and low/mid-tier market, where most of the money is made. There is a reason why nV brought out the 1660.
> Lower power requirements allowing for wider applications (such as laptops, custom APU designs, etc.) are quite a benefit, just not one for a niche feature in expensive GPUs. AMD is going for the integrated and low/mid-tier market, where most of the money is made. There is a reason why nV brought out the 1660.

Vega 20 got only an extra 73 MHz (before it was reduced further) and twice the HBM2 stacks out of going from 14nm to 7nm, while still being a 300 W toaster. Plus, its cost went up significantly, so even at $700 it was sold at a $100-200 loss. The biggest problem with lower nodes is that they cost more, transistor for transistor, after 14nm, so even plain shrinks increase cost.