Intro to gfx-hal • Part 1: Drawing a triangle

This is Part 1 of a series of tutorials on graphics programming in Rust, using gfx-hal.

If you’re reading this, I assume you want to learn gfx-hal. I will also callously assume you know a little OpenGL, or something similar, and have at least a rough idea of how shaders work. At least, this was where I was before I wrote these tutorials - so hopefully if you fit that category, you’ll find them useful.

The very simplest thing I can think to do with a graphics API is to draw a single triangle. You might expect that to be quick and easy. Not so! Like Vulkan (the API it is based on), gfx requires a lot of setup to render anything at all. I’ve tried to make the code concise, but the fact remains that Part 1 of this series will likely be the longest and most complex entry.

I would encourage you to stick with it though! Once you have this foundation to build on, everything after will come more easily.

This page includes all the code you need to get set up and drawing your first triangle. Even if you do nothing but copy and paste, you’ll have something working at the end to play with.

You can also find the full code for this part, with comments, here: part-1-triangle.

If you’re looking for more information about the various functions and parameters used, you’ll likely find it in those comments. Of course you can also check the gfx-hal documentation.

One final disclaimer: though this code should run on Windows, macOS, and Linux - I currently only have the means to test on a Mac (with the Metal backend). If you run into any issues on other platforms, please let me know!

With all that said, let me tell you how to draw One Triangle:

Shaders

Where possible in these tutorials, I’d like to start with the shaders. I want to first focus on what we’re going to draw, and then delve into the how.

Our shaders are written in GLSL, which we’ll compile to SPIR-V - the format gfx uses - at runtime.

So first off, here’s the vertex shader:

// shaders/part-1.vert
#version 450
#extension GL_ARB_separate_shader_objects : enable

void main() {
    vec2 position;
    if (gl_VertexIndex == 0) {
        position = vec2(0.0, -0.5);
    } else if (gl_VertexIndex == 1) {
        position = vec2(-0.5, 0.5);
    } else if (gl_VertexIndex == 2) {
        position = vec2(0.5, 0.5);
    }

    gl_Position = vec4(position, 0.0, 1.0);
}

It’s so simple that it doesn’t even have any inputs. Instead we hardcode the three vertices of the triangle, and use the gl_VertexIndex built-in to set the position based on which vertex we’re on.

Next is our fragment shader:

// shaders/part-1.frag
#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(location = 0) out vec4 fragment_color;

void main() {
    fragment_color = vec4(0.5, 0.5, 1.0, 1.0);
}

It’s also simple. All it does is output a nice lilac color. Hopefully now you can imagine what the final image will look like. (Or you’ve seen a thumbnail image and you already know.) Now let’s create a gfx application to render this image.

Setup

The first thing we have to do is create a new Rust project ( cargo new gfx-hal-tutorials ) and edit the default Cargo.toml file:

# Cargo.toml
[package]
name = "gfx-hal-tutorials"
version = "0.1.0"
edition = "2018"
license = "CC0-1.0"

[dependencies]
bincode = "~1.2.1"
gfx-hal = "=0.5.0"
glsl-to-spirv = "=0.1.7"
image = "~0.22.4"
serde = { version = "~1.0.104", features = ["derive"] }
winit = "~0.20.0"

[target.'cfg(target_os = "macos")'.dependencies.backend]
package = "gfx-backend-metal"
version = "=0.5.1"

[target.'cfg(windows)'.dependencies.backend]
package = "gfx-backend-dx12"
version = "=0.5.0"

[target.'cfg(all(unix, not(target_os = "macos")))'.dependencies.backend]
package = "gfx-backend-vulkan"
version = "=0.5.1"

There are a few dependencies above that aren’t used by this tutorial, but will be in future parts. Aside from those we have:

  • The winit crate for window management.

  • The gfx-hal crate which defines the traits that the backends implement.

  • And the gfx-backend-* crates, each representing an available graphics backend.

The bizarre [target.'cfg(...)'.dependencies.backend] syntax allows us to easily be generic across different operating systems. What this does is change the contents of the backend crate depending on the OS. So it points to gfx-backend-metal for macos, and so on.

Our code can just use the magic backend crate and should work the same on all of these platforms.
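
For example, the instance-creation call we’ll write later (in the Graphics resources section) goes through this alias, and is identical on every platform:

// The `backend` alias resolves to gfx-backend-metal on macOS,
// gfx-backend-dx12 on Windows, and gfx-backend-vulkan elsewhere -
// but this line stays the same everywhere:
let instance = backend::Instance::create(APP_NAME, 1).expect("Backend not supported");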

The next thing to do is start putting code in our main.rs to initialize our graphics resources.

Initialization

One of the nice things (from a performance and correctness standpoint) about gfx is that a lot of the work is frontloaded. However, that means there’s a lot of initialization to do - lots of selecting, creating, and configuring resources. But by contrast, when the initialization is done, the actual rendering is very simple. It’s basically just filling command buffers with commands and submitting them to a command queue.

So here’s a quick summary of the initialization that’s required to do that. We will need:

  • A window to display our rendered image.

  • A backend instance to access the graphics API. This gives us access to:

    • A surface on which to render, and then present to the window.

    • An adapter which represents a physical device (like a graphics card).

  • One or more queue groups, which give us access to command queues. (More on that soon.)

  • A device, which is a logical device we obtain by configuring the adapter. This will be used to create most of the rest of our resources, including:

    • A command pool for allocating command buffers to send instructions to the command queues. (We’ll talk about what that means later.)

    • A render pass which defines how different images are used. (For example, which to render to, which is a depth buffer, etc.)

    • A graphics pipeline which contains our shaders and specifies how exactly to render each triangle.

    • And finally, a fence and a semaphore for synchronizing our program. (More on that later.)

It’s a lot, but at least you only need to do it once.

Creating a window

The very first thing for us to do is define a main function:

// src/main.rs (or other binary)
fn main() {
    use std::mem::ManuallyDrop;

    use gfx_hal::{
        device::Device,
        window::{Extent2D, PresentationSurface, Surface},
        Instance,
    };
    use glsl_to_spirv::ShaderType;

    const APP_NAME: &'static str = "Part 1: Drawing a triangle";
    const WINDOW_SIZE: [u32; 2] = [512, 512];

    let event_loop = winit::event_loop::EventLoop::new();

    // ...
}

You’ll notice we imported a few common traits and structs from the gfx_hal crate. In general, throughout this tutorial I’ll try to keep imports close to where they are used, but for the more common items, it makes sense to import them up-front.

The gfx_hal crate itself is mostly agnostic to the windowing library you use with it. Here we’re going to use winit, and every winit program starts with creating an EventLoop. We can use the event loop to create our window.

You’ll also notice that we defined a constant for the WINDOW_SIZE above, but before we can actually create a window, there are some subtleties to address when it comes to resolution. I feel the winit docs explain this better than I ever could, but I’ll give it a try. Feel free to read the winit docs and skip this next paragraph though.

High-DPI displays, to avoid having unusably small UI elements, pretend to have a smaller size than they actually do. For example, a screen 2048 physical pixels wide may report a logical size of 1024, along with a scale factor of 2. This means that a 1024 pixel window will fill the whole screen, because the OS will scale it up by 2 under the hood to cover all 2048 pixels. It also means that on my other, more ancient 1024 pixel monitor with a scale factor of just 1, the window will appear to be the same size, without me having to configure the window differently.

So physical size represents real life pixels, and varies a lot across different devices, while logical size is an abstraction representing a smaller size which is more consistent between devices.

let (logical_window_size, physical_window_size) = {
        use winit::dpi::{LogicalSize, PhysicalSize};

        let dpi = event_loop.primary_monitor().scale_factor();
        let logical: LogicalSize<u32> = WINDOW_SIZE.into();
        let physical: PhysicalSize<u32> = logical.to_physical(dpi);

        (logical, physical)
    };

The physical size is what we’re concerned with when it comes to rendering, as we want our rendering surface to cover every pixel. We’ll create an Extent2D structure of this size which several gfx methods will require later:

let mut surface_extent = Extent2D {
        width: physical_window_size.width,
        height: physical_window_size.height,
    };

For constructing the window itself however, we want to use the logical size so that it appears consistent across different display densities:

let window = winit::window::WindowBuilder::new()
        .with_title(APP_NAME)
        .with_inner_size(logical_window_size)
        .build(&event_loop)
        .expect("Failed to create window");

Before we do anything else, let’s jump ahead and set up our main event loop so we can see our window open:

// This will be very important later! It must be initialized to `true` so
    // that we rebuild the swapchain on the first frame.
    let mut should_configure_swapchain = true;

    // Note that this takes a `move` closure. This means it will take ownership
    // over any resources referenced within. It also means they will be dropped
    // only when the application is quit.
    event_loop.run(move |event, _, control_flow| {
        use winit::event::{Event, WindowEvent};
        use winit::event_loop::ControlFlow;

        match event {
            Event::WindowEvent { event, .. } => match event {
                WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
                WindowEvent::Resized(dims) => {
                    surface_extent = Extent2D {
                        width: dims.width,
                        height: dims.height,
                    };
                    should_configure_swapchain = true;
                }
                WindowEvent::ScaleFactorChanged { new_inner_size, .. } => {
                    surface_extent = Extent2D {
                        width: new_inner_size.width,
                        height: new_inner_size.height,
                    };
                    should_configure_swapchain = true;
                }
                _ => (),
            },
            Event::MainEventsCleared => window.request_redraw(),
            Event::RedrawRequested(_) => {
                // Here's where we'll perform our rendering.
            }
            _ => (),
        }
    });

(Note the should_configure_swapchain variable. The swapchain is a chain of images for rendering onto. Each frame, one of those images is displayed onscreen. I’ll explain more about this later - for now, just make sure you set this variable to true.)

As for the rest of it, we’re passing a closure to event_loop.run(...). This closure is where we’ll handle all of our input events, and also where we’ll instruct gfx to render our scene.

To quickly summarize the events we’re handling here:

  • CloseRequested: This happens when a user clicks the ‘X’ on the window. We use ControlFlow::Exit to signal our application to stop.

  • Resized: This happens when a user resizes the window. We want to make sure to store the new size and set should_configure_swapchain to true, because this will change the dimensions of our underlying surface.

  • ScaleFactorChanged: This could happen if the user drags the window onto a monitor with a different DPI setting. This also changes the underlying surface dimensions, so we do the same as above.

  • MainEventsCleared: This happens every frame once other input events have been handled. Here is where you would perform the non-rendering logic of your application - but all we want to do is request a redraw.

  • RedrawRequested: As the name implies, this event happens when we request a redraw. Here’s where we’ll put our rendering logic once we’re ready.

Now you should be able to run the app and see an empty window. I hope you like looking at it, because it’s all you’re going to see until the very last moment of this tutorial. It’s a good idea to run the program after each change though, just to make sure there are no crashes.

So now we have a window. If we want to be able to draw a triangle, we’re going to have to talk to the GPU.

Graphics resources

As we’re still in the process of initialization, this must all take place before the event_loop.run(...) call.

Our very first call to gfx will be to create an Instance which serves as an entrypoint to the backend graphics API. We use this only to acquire a surface to draw on, and an adapter which represents a physical graphics device (e.g. a graphics card):

let (instance, surface, adapter) = {
        let instance = backend::Instance::create(APP_NAME, 1).expect("Backend not supported");

        let surface = unsafe {
            instance
                .create_surface(&window)
                .expect("Failed to create surface for window")
        };

        let adapter = instance.enumerate_adapters().remove(0);

        (instance, surface, adapter)
    };

Next we want to acquire a logical device which will allow us to create the rest of our resources. You can think of a logical device as a particular configuration of a physical device - with or without certain features enabled.

We also want a queue_group to give us access to command queues so we can later give commands to the GPU. There are different families of queues with different capabilities. Our only requirements are:

  1. That the queues are compatible with our surface, and

  2. That the queues support graphics commands.

Once we select an appropriate queue_family, we can obtain both our device and our queue group:

let (device, mut queue_group) = {
        use gfx_hal::queue::QueueFamily;

        let queue_family = adapter
            .queue_families
            .iter()
            .find(|family| {
                surface.supports_queue_family(family) && family.queue_type().supports_graphics()
            })
            .expect("No compatible queue family found");

        let mut gpu = unsafe {
            use gfx_hal::adapter::PhysicalDevice;

            adapter
                .physical_device
                .open(&[(queue_family, &[1.0])], gfx_hal::Features::empty())
                .expect("Failed to open device")
        };

        (gpu.device, gpu.queue_groups.pop().unwrap())
    };

Command buffers

As previously mentioned, in order to render anything, we have to send commands to the GPU via a command queue. To do this efficiently, we batch those commands together in a structure called a command buffer. These command buffers are allocated from a command pool.

We create a command_pool below, passing in the family of our queue group so that the buffers allocated from it are compatible with those queues. We then allocate a single primary (non-nested) command_buffer from it, which we will re-use each frame:

let (command_pool, mut command_buffer) = unsafe {
        use gfx_hal::command::Level;
        use gfx_hal::pool::{CommandPool, CommandPoolCreateFlags};

        let mut command_pool = device
            .create_command_pool(queue_group.family, CommandPoolCreateFlags::empty())
            .expect("Out of memory");

        let command_buffer = command_pool.allocate_one(Level::Primary);

        (command_pool, command_buffer)
    };

Now we’re able to send commands - but we haven’t yet talked about what those commands look like.

The gfx-hal library adopts a model very similar to the Vulkan API, where a typical command buffer might look something like:

  1. Begin the command buffer

  2. Begin a render pass

  3. Bind a pipeline (and potentially other state, like vertex buffers etc.)

  4. Draw some vertices (usually as triangles)

  5. End the render pass

  6. Finish the command buffer
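
As a preview, here’s roughly what that sequence will look like in gfx-hal by the end of this part. Don’t worry about the arguments yet - render_pass, framebuffer, viewport, and pipeline are all resources we’ll create over the rest of this tutorial, and the full version is explained step-by-step in the Graphics commands section:

// Preview only - assumes `command_buffer`, `render_pass`, `framebuffer`,
// `viewport`, and `pipeline` already exist. Each step matches the list above.
unsafe {
    use gfx_hal::command::{
        ClearColor, ClearValue, CommandBuffer, CommandBufferFlags, SubpassContents,
    };

    command_buffer.begin_primary(CommandBufferFlags::ONE_TIME_SUBMIT); // 1. Begin
    command_buffer.begin_render_pass(
        render_pass,
        &framebuffer,
        viewport.rect,
        &[ClearValue {
            color: ClearColor {
                float32: [0.0, 0.0, 0.0, 1.0],
            },
        }],
        SubpassContents::Inline,
    ); // 2. Begin render pass
    command_buffer.bind_graphics_pipeline(pipeline); // 3. Bind pipeline
    command_buffer.draw(0..3, 0..1); // 4. Draw vertices
    command_buffer.end_render_pass(); // 5. End render pass
    command_buffer.finish(); // 6. Finish
}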

A render pass is an object that describes how images should be used while rendering. When you hear images, you may be thinking of textures - but this also applies to images such as the surface of the window, and the depth buffer. If you were rendering to multiple different images, you would need multiple render passes. We don’t need to bother with that right now - but we still need a single render pass to draw anything at all.

A pipeline is probably the most important and complex object we’ll be dealing with in these tutorials. Pipelines define almost all of the rendering process, including the shaders, type of primitive to draw (triangles, lines, etc.), the inputs to use (uniforms, textures), and so on. You can bind it in a command buffer, and it will affect everything you draw until another pipeline is bound.

So in order to build a useful command buffer, we’ll need to create a render pass and a pipeline. Let’s start with the render pass.

Render passes

The first thing we need for the render pass is a color format - the format of each pixel in the image. Different displays and graphics cards might support different formats - imagine in the extreme a grayscale display that only supports one color channel. We want to pick one compatible with both our surface and device:

let surface_color_format = {
        use gfx_hal::format::{ChannelType, Format};

        let supported_formats = surface
            .supported_formats(&adapter.physical_device)
            .unwrap_or(vec![]);

        let default_format = *supported_formats.get(0).unwrap_or(&Format::Rgba8Srgb);

        supported_formats
            .into_iter()
            .find(|format| format.base_format().1 == ChannelType::Srgb)
            .unwrap_or(default_format)
    };

We get a list of supported formats and try to pick the first one that supports SRGB (so gamma correction is handled for us). Failing that, we default to whatever format comes first. If the surface doesn’t return any supported formats, that means we can choose whatever we like, so we choose Rgba8Srgb.

With that, we can create our render pass. It’s going to comprise one color attachment and one subpass.

You can think of an attachment as a slot for an image to fill. The color attachment is what we’ll be rendering to: whatever image is bound to it when we render with this render pass is the image that receives our output.

A subpass defines a subset of those attachments to use. If we wanted to change which attachment was the color attachment in the middle of our render pass, we could use a second subpass to do this (though there are restrictions). You need at least one subpass, and that’s all we’ll provide:

let render_pass = {
        use gfx_hal::image::Layout;
        use gfx_hal::pass::{
            Attachment, AttachmentLoadOp, AttachmentOps, AttachmentStoreOp, SubpassDesc,
        };

        let color_attachment = Attachment {
            format: Some(surface_color_format),
            samples: 1,
            ops: AttachmentOps::new(AttachmentLoadOp::Clear, AttachmentStoreOp::Store),
            stencil_ops: AttachmentOps::DONT_CARE,
            layouts: Layout::Undefined..Layout::Present,
        };

        let subpass = SubpassDesc {
            colors: &[(0, Layout::ColorAttachmentOptimal)],
            depth_stencil: None,
            inputs: &[],
            resolves: &[],
            preserves: &[],
        };

        unsafe {
            device
                .create_render_pass(&[color_attachment], &[subpass], &[])
                .expect("Out of memory")
        }
    };

Note that the subpass lists index 0 in the colors field. This index refers to the list of attachments passed into create_render_pass and means we’re using the first (index 0) attachment as a color attachment.

Pipelines

Next, we’re going to define our rendering pipeline. This starts with the pipeline layout, which is very simple for our case:

let pipeline_layout = unsafe {
        device
            .create_pipeline_layout(&[], &[])
            .expect("Out of memory")
    };

Ordinarily this would define the kind of resources and constants we want to make available to our pipeline while rendering. Things like textures and matrices required by our shaders. Of course, our shaders are so simple they don’t require such finery, so we just pass empty slices.
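
To give a flavor of what a non-empty layout might look like: this hypothetical version (not needed in this part) would declare a small range of push constants, visible to the vertex shader stage:

// Hypothetical sketch only - our actual layout above is empty.
use gfx_hal::pso::ShaderStageFlags;

let pipeline_layout = unsafe {
    device
        .create_pipeline_layout(&[], &[(ShaderStageFlags::VERTEX, 0..8)])
        .expect("Out of memory")
};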

Speaking of shaders:

let vertex_shader = include_str!("shaders/part-1.vert");
    let fragment_shader = include_str!("shaders/part-1.frag");

This includes both shaders as static strings within our program. Before we move on to the pipeline though, we’re going to define one of the few actual functions we’ll be writing in these tutorials.

If you remember, these shaders are written in GLSL - which gfx-hal doesn’t support directly. To use them, we’ll have to first compile them to SPIR-V - a more efficient intermediate representation.

Luckily, there is a crate, glsl-to-spirv, which can do that for us - even if it is a little fiddly. (It’s not usually something you would do on the fly.)

We have two shaders to compile and I don’t like doing things twice, so naturally:

/// Compile some GLSL shader source to SPIR-V.
    fn compile_shader(glsl: &str, shader_type: ShaderType) -> Vec<u32> {
        use std::io::{Cursor, Read};

        let mut compiled_file =
            glsl_to_spirv::compile(glsl, shader_type).expect("Failed to compile shader");

        let mut spirv_bytes = vec![];
        compiled_file.read_to_end(&mut spirv_bytes).unwrap();

        let spirv = gfx_hal::pso::read_spirv(Cursor::new(&spirv_bytes)).expect("Invalid SPIR-V");

        spirv
    }

Here we call glsl_to_spirv::compile to compile our GLSL source into a SPIR-V file, which we immediately read back into memory. (I did say it was fiddly.) We then pass read_spirv a view of this data, which ensures it is correctly aligned to 4 bytes (hence the u32 in the return type). The resulting Vec contains the SPIR-V data we need for our pipeline.
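
As a quick usage sketch, compiling the vertex shader source we included earlier looks like this (the pipeline-building function below does exactly this internally):

// Compile the included GLSL vertex shader to a vector of SPIR-V words.
let vertex_spirv: Vec<u32> = compile_shader(vertex_shader, ShaderType::Vertex);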

Now for the pipeline itself - the most complex structure we’ll be building today. In future we may have multiple pipelines as well, so let’s define another function:

/// Create a pipeline with the given layout and shaders.
    unsafe fn make_pipeline<B: gfx_hal::Backend>(
        device: &B::Device,
        render_pass: &B::RenderPass,
        pipeline_layout: &B::PipelineLayout,
        vertex_shader: &str,
        fragment_shader: &str,
    ) -> B::GraphicsPipeline {
        use gfx_hal::pass::Subpass;
        use gfx_hal::pso::{
            BlendState, ColorBlendDesc, ColorMask, EntryPoint, Face, GraphicsPipelineDesc,
            GraphicsShaderSet, Primitive, Rasterizer, Specialization,
        };
        todo!()
    }

There are a couple of things worth mentioning about this already. The first is that we’ve written it to be generic across any backend. This not only makes the function more portable, but also makes it easier to write the types of the input parameters (e.g. B::Device instead of the specific Device struct from every single backend).

The second thing to note is that we’re passing in a specific render pass. This is because each pipeline is defined only for one render pass. If you need to use the same setup in different render passes, you unfortunately need to make two identical pipelines.

Now let’s start filling in the body of this function. The first thing we want to do is compile our shaders, create entry points for them, and then create a shader set:

// fn make_pipeline(...) {
        let vertex_shader_module = device
            .create_shader_module(&compile_shader(vertex_shader, ShaderType::Vertex))
            .expect("Failed to create vertex shader module");

        let fragment_shader_module = device
            .create_shader_module(&compile_shader(fragment_shader, ShaderType::Fragment))
            .expect("Failed to create fragment shader module");

        let (vs_entry, fs_entry) = (
            EntryPoint {
                entry: "main",
                module: &vertex_shader_module,
                specialization: Specialization::default(),
            },
            EntryPoint {
                entry: "main",
                module: &fragment_shader_module,
                specialization: Specialization::default(),
            },
        );

        let shader_entries = GraphicsShaderSet {
            vertex: vs_entry,
            hull: None,
            domain: None,
            geometry: None,
            fragment: Some(fs_entry),
        };

You’ll notice we had to create a shader module for each shader first. This is so shaders can be re-used across different pipelines, but we won’t be doing that now.

The EntryPoint struct is exactly what it sounds like - it defines how your shader begins executing. We’ll ignore specialization for now, but the entry parameter is just the name of the entry point function. (Like fn main() in Rust.)

Finally, the GraphicsShaderSet defines which shader stages are used, and which shaders to use for them. For now, we only have a vertex and fragment shader to supply.

We can now begin to configure the pipeline:

let mut pipeline_desc = GraphicsPipelineDesc::new(
            shader_entries,
            Primitive::TriangleList,
            Rasterizer {
                cull_face: Face::BACK,
                ..Rasterizer::FILL
            },
            pipeline_layout,
            Subpass {
                index: 0,
                main_pass: render_pass,
            },
        );

        pipeline_desc.blender.targets.push(ColorBlendDesc {
            mask: ColorMask::ALL,
            blend: Some(BlendState::ALPHA),
        });

As mentioned, pipelines can get fairly complex. We use the new function to create a bare-bones pipeline, defining the shaders to use, the primitive to render, and that we wish to cull back-faces. We also supply our pipeline layout and render pass. Now we can extend this configuration by modifying other fields.

The only thing we add for now is a color target. This ColorBlendDesc is now the only target in the list, and therefore has index 0. This means it tells the pipeline how to write color to color attachment 0 in the render pass. With ColorMask::ALL we say we’re writing to all color channels, and with BlendState::ALPHA we say we want alpha blending where pixels overlap.
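
For reference, that alpha blending combines each pixel we write with whatever is already in the attachment, roughly as final.rgb = src.rgb * src.a + dst.rgb * (1 - src.a), where src is the fragment shader’s output. Our fragment shader always outputs an alpha of 1.0, so in this part the triangle will simply overwrite the cleared background.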

The last thing to do is to create the pipeline, destroy the shader modules (as we don’t plan to re-use them), and then return the pipeline:

let pipeline = device
            .create_graphics_pipeline(&pipeline_desc, None)
            .expect("Failed to create graphics pipeline");

        device.destroy_shader_module(vertex_shader_module);
        device.destroy_shader_module(fragment_shader_module);

        pipeline

Then we simply call the function with our resources and shaders:

let pipeline = unsafe {
        make_pipeline::<backend::Backend>(
            &device,
            &render_pass,
            &pipeline_layout,
            vertex_shader,
            fragment_shader,
        )
    };

Synchronization primitives

The last two resources to create are synchronization primitives. The GPU can execute in parallel to the CPU, so we need some way of ensuring that they don’t interfere with each other.

The first thing to create is a submission_complete_fence. A fence allows the CPU to wait for the GPU. In our case, we’re going to use it to wait for the command buffer we submit to be available for writing again.

The next is a rendering_complete_semaphore. A semaphore allows you to synchronize different processes within the GPU. In our case we’re going to use it to tell the GPU to wait until the frame has finished rendering before displaying it onscreen.

let submission_complete_fence = device.create_fence(true).expect("Out of memory");
    let rendering_complete_semaphore = device.create_semaphore().expect("Out of memory");

We’ll go into more detail with these when we start using them.

Memory management

We have now created everything that we need to start rendering. But here’s the part that sucks: we have to clean up after ourselves. This wouldn’t be so bad if not for a specific intersection of two things. Namely that winit takes ownership over our resources and drops them, but gfx requires us to manually delete them (which we can’t do because they’ve been moved).

The neatest solution (that I can think of) is to wrap our resources in a struct with a Drop implementation to clean them up.

So first of all we’ll group everything we need to destroy into one struct. As a rule of thumb, if you called a function called create_<something>, then the something should go here:

struct Resources<B: gfx_hal::Backend> {
        instance: B::Instance,
        surface: B::Surface,
        device: B::Device,
        render_passes: Vec<B::RenderPass>,
        pipeline_layouts: Vec<B::PipelineLayout>,
        pipelines: Vec<B::GraphicsPipeline>,
        command_pool: B::CommandPool,
        submission_complete_fence: B::Fence,
        rendering_complete_semaphore: B::Semaphore,
    }

I expect we’ll be making more render passes, pipeline layouts, and pipelines in later parts, so I’m jumping the gun and putting them in a Vec so we don’t have to update the struct definition each time we add one. It’s a pretty lazy solution but it’ll do for now.

Unfortunately, we can’t implement Drop for this struct directly. This is because the signature of drop takes a &mut self parameter, while the signatures of the destroy_<something> functions take a self parameter (meaning that they want to take ownership of self).

So we need a way to move our resources out of a &mut reference. One way to do this is to put our resources in a ManuallyDrop, and use the take method to pull out the contents:

struct ResourceHolder<B: gfx_hal::Backend>(ManuallyDrop<Resources<B>>);

    impl<B: gfx_hal::Backend> Drop for ResourceHolder<B> {
        fn drop(&mut self) {
            unsafe {
                let Resources {
                    instance,
                    mut surface,
                    device,
                    command_pool,
                    render_passes,
                    pipeline_layouts,
                    pipelines,
                    submission_complete_fence,
                    rendering_complete_semaphore,
                } = ManuallyDrop::take(&mut self.0);

                device.destroy_semaphore(rendering_complete_semaphore);
                device.destroy_fence(submission_complete_fence);
                for pipeline in pipelines {
                    device.destroy_graphics_pipeline(pipeline);
                }
                for pipeline_layout in pipeline_layouts {
                    device.destroy_pipeline_layout(pipeline_layout);
                }
                for render_pass in render_passes {
                    device.destroy_render_pass(render_pass);
                }
                device.destroy_command_pool(command_pool);
                surface.unconfigure_swapchain(&device);
                instance.destroy_surface(surface);
            }
        }
    }

Now we can instantiate this struct, which will be moved into the event loop and dropped when the program exits, calling all of our destructors and cleaning up our resources:

let mut resource_holder: ResourceHolder<backend::Backend> =
        ResourceHolder(ManuallyDrop::new(Resources {
            instance,
            surface,
            device,
            command_pool,
            render_passes: vec![render_pass],
            pipeline_layouts: vec![pipeline_layout],
            pipelines: vec![pipeline],
            submission_complete_fence,
            rendering_complete_semaphore,
        }));

The worst is now over! I promise! We’re in the home stretch now: it’s time to write our per-frame rendering code.

Rendering

First, let’s return to our RedrawRequested event and prepare a few things:

Event::RedrawRequested(_) => {
                let res: &mut Resources<_> = &mut resource_holder.0;
                let render_pass = &res.render_passes[0];
                let pipeline = &res.pipelines[0];

                // ...

Our Resources struct is holding all of the important things we want to use. The above code gives us easy access to them via the res reference.

We’ll also pull the render pass and pipeline out of the lists we stored them in, so we can still refer to them by nicer names.

Next, we’ll see our first use of the fence we created. We’re about to reset our command buffer - which would be terrible if the commands hadn’t been submitted to the GPU yet. So what we’ll do is wait for the fence before we reset it, and later when we submit the command buffer, we’ll tell it to signal the fence once it’s done. That means that we can’t progress past this part until the submission is complete.

(Except we also added a timeout - but that’s specifically to avoid hanging in cases where the fence doesn’t get signalled for whatever reason.)

Once we’re clear, we reset the fence, and we also reset the command pool - which clears the buffers allocated from it:

unsafe {
                    use gfx_hal::pool::CommandPool;

                    // We refuse to wait more than a second, to avoid hanging.
                    let render_timeout_ns = 1_000_000_000;

                    res.device
                        .wait_for_fence(&res.submission_complete_fence, render_timeout_ns)
                        .expect("Out of memory or device lost");

                    res.device
                        .reset_fence(&res.submission_complete_fence)
                        .expect("Out of memory");

                    res.command_pool.reset(false);
                }

Swapchain

Next up, we’re going to configure the swapchain. What’s this swapchain thing, you ask? Well it’s a chain of images that we can render onto and then present to our window. While we’re showing one of them on screen, we can render to a different one. Then once we’re done rendering, we can swap them.

This is one of the few places where gfx departs significantly from the Vulkan API. In Vulkan, you create and manage the swapchain yourself. In gfx, the surface mostly does it for you. You can read more about the decision behind that here.

All we have to do is re-configure the swapchain whenever it’s invalidated (for example, when the application starts, or when the window resizes). Remember the should_configure_swapchain variable we declared? I hope you initialized it to true, because this is how we make sure it’s ready for the first frame:

if should_configure_swapchain {
                    use gfx_hal::window::SwapchainConfig;

                    let caps = res.surface.capabilities(&adapter.physical_device);

                    let mut swapchain_config =
                        SwapchainConfig::from_caps(&caps, surface_color_format, surface_extent);

                    // This seems to fix some fullscreen slowdown on macOS.
                    if caps.image_count.contains(&3) {
                        swapchain_config.image_count = 3;
                    }

                    surface_extent = swapchain_config.extent;

                    unsafe {
                        res.surface
                            .configure_swapchain(&res.device, swapchain_config)
                            .expect("Failed to configure swapchain");
                    };

                    should_configure_swapchain = false;
                }

First we get the capabilities of the surface - which is exactly what it sounds like: the supported swapchain configuration parameters. Then we pass this, the surface format, and the desired extent (physical size of the images in the swapchain) to the SwapchainConfig::from_caps method. This returns a swapchain_config.

We can modify this config, within the limits of the surface capabilities, then call configure_swapchain to update our surface’s swapchain. We also store the extent from the swapchain_config back into surface_extent - just in case it differs from the size we requested.

The swapchain is now ready. To start rendering, we’ll need to acquire an image from it. This will return us an image in the chain that is ready to be used (meaning it is not currently being displayed onscreen):

let surface_image = unsafe {
                    // We refuse to wait more than a second, to avoid hanging.
                    let acquire_timeout_ns = 1_000_000_000;

                    match res.surface.acquire_image(acquire_timeout_ns) {
                        Ok((image, _)) => image,
                        Err(_) => {
                            should_configure_swapchain = true;
                            return;
                        }
                    }
                };

Next we create a framebuffer. This is what actually connects images (like the one we got from our swapchain) to attachments within the render pass (like the one color attachment we specified). The attachments of the render pass are like a set of slots, while a framebuffer is a set of images to fill those slots:

let framebuffer = unsafe {
                    use std::borrow::Borrow;

                    use gfx_hal::image::Extent;

                    res.device
                        .create_framebuffer(
                            render_pass,
                            vec![surface_image.borrow()],
                            Extent {
                                width: surface_extent.width,
                                height: surface_extent.height,
                                depth: 1,
                            },
                        )
                        .unwrap()
                };

The very last thing to create before we start recording commands is the viewport. This is just a structure defining an area of the window, which can be used to clip (scissor) or scale (viewport) the output of your rendering. We’re going to render to the whole window, so we create a viewport the size of the surface_extent:

let viewport = {
                    use gfx_hal::pso::{Rect, Viewport};

                    Viewport {
                        rect: Rect {
                            x: 0,
                            y: 0,
                            w: surface_extent.width as i16,
                            h: surface_extent.height as i16,
                        },
                        depth: 0.0..1.0,
                    }
                };

Graphics commands

Everything is ready now - all that’s left is to record our commands and submit them.

A command buffer must always start with a begin command, so let’s do that. We’ll also set the viewport and scissor rect to encompass the whole window:

unsafe {
                    use gfx_hal::command::{
                        ClearColor, ClearValue, CommandBuffer, CommandBufferFlags, SubpassContents,
                    };

                    command_buffer.begin_primary(CommandBufferFlags::ONE_TIME_SUBMIT);

                    command_buffer.set_viewports(0, &[viewport.clone()]);
                    command_buffer.set_scissors(0, &[viewport.rect]);

Next we begin the render pass. We tell it to clear the color attachment to black before rendering:

command_buffer.begin_render_pass(
                        render_pass,
                        &framebuffer,
                        viewport.rect,
                        &[ClearValue {
                            color: ClearColor {
                                float32: [0.0, 0.0, 0.0, 1.0],
                            },
                        }],
                        SubpassContents::Inline,
                    );

Next we bind our pipeline. Now any triangles we draw will be rendered with the settings and shaders of that pipeline:

command_buffer.bind_graphics_pipeline(pipeline);

Now the actual draw call itself. We’ve already bound everything we need. Our shaders even take care of the vertex positions, so all we need to tell the GPU is: “draw vertices 0..3 (0, 1, and 2) as a triangle”. That’s what this does:

command_buffer.draw(0..3, 0..1);

(You can ignore the 0..1 - that’s used for instanced rendering.)
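
As an aside (we won’t use instancing in this tutorial), a hypothetical instanced draw would just widen that second range:

// Hypothetical: draws 100 instances of the same three vertices. The
// vertex shader could tell them apart via the `gl_InstanceIndex` built-in.
command_buffer.draw(0..3, 0..100);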

Then finally, we can end the render pass, and our command buffer:

command_buffer.end_render_pass();
                    command_buffer.finish();
                }

Submission

The commands are ready to submit. We prepare a Submission, which simply contains the command buffers to submit, as well as a list of semaphores to signal once rendering is complete.

We submit this to our queue, and tell it to signal the fence once the submission is complete. (Remember, this is how we know when we can reset the command buffer):

unsafe {
                    use gfx_hal::queue::{CommandQueue, Submission};

                    let submission = Submission {
                        command_buffers: vec![&command_buffer],
                        wait_semaphores: None,
                        signal_semaphores: vec![&res.rendering_complete_semaphore],
                    };

                    queue_group.queues[0].submit(submission, Some(&res.submission_complete_fence));
                    // ...

Finally we call present_surface and pass our rendering_complete_semaphore. This will wait until the semaphore signals and then display the finished image on screen:

// ...
                    let result = queue_group.queues[0].present_surface(
                        &mut res.surface,
                        surface_image,
                        Some(&res.rendering_complete_semaphore),
                    );

                    should_configure_swapchain |= result.is_err();

                    res.device.destroy_framebuffer(framebuffer);
                }

For good measure, we check whether there were any errors here, and if so, we reconfigure the swapchain next frame. It’s not exactly scientific, but it will hopefully paper over any temporary, unforeseen errors with the graphics context. We also clean up the framebuffer we created.

Now, at long last, after about 400 lines of code, our application will finally render something. Ready for it? Here it is:

[Screenshot: a lilac triangle on a black background]

It sure is a triangle! Don’t get overwhelmed now, have a lie down if you need to.

Hopefully you found this useful - I know it was a lot to follow. The plus side is that the hardest part is over. Even if it takes a few reads to absorb, it’ll be worth it when you see how simple it is to extend this one example to render more varied and complex things in later parts of this tutorial.

Thanks for reading, and I hope you’ll enjoy Part 2, where we draw all kinds of different triangles.

  1. Thanks to this tutorial for giving me the idea. They use an array of points, which is a lot cleaner, but was breaking in mysterious ways for me on Metal. If anyone knows what that’s about, let me know. 
