Rust and Vulkan GPU rendering, vertex-buffer mapping, indexed-drawing, depth-buffering...
// Vulkan Scratch Routine 2400 ----------------- -- -- -- ---- -
use erupt::{
cstr,
utils::{self, surface},
vk, DeviceLoader, EntryLoader, InstanceLoader,
vk::{Device, MemoryMapFlags},
};
use cgmath::{Deg, Rad, Matrix4, Point3, Vector3, Vector4};
use std::{
ffi::{c_void, CStr, CString},
fs,
fs::{write, OpenOptions},
io::prelude::*,
mem::*,
os::raw::c_char,
ptr,
result::Result,
result::Result::*,
string::String,
sync::Arc,
thread,
time,
};
use smallvec::SmallVec;
use raw_window_handle::{HasRawWindowHandle, RawWindowHandle};
use memoffset::offset_of;
use simple_logger::SimpleLogger;
use winit::{
dpi::PhysicalSize,
event::{
Event, KeyboardInput, WindowEvent,
ElementState, StartCause, VirtualKeyCode,
DeviceEvent,
},
event_loop::{ControlFlow, EventLoop},
window::WindowBuilder,
window::Window
};
use structopt::StructOpt;
const TITLE: &str = "vulkan-routine-2400";
const FRAMES_IN_FLIGHT: usize = 2;
const LAYER_KHRONOS_VALIDATION: *const c_char = cstr!("VK_LAYER_KHRONOS_validation");
const SHADER_VERT: &[u8] = include_bytes!("../spv/s_300__.vert.spv");
const SHADER_FRAG: &[u8] = include_bytes!("../spv/s1.frag.spv");
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct FrameData {
present_semaphore: vk::Semaphore,
render_semaphore: vk::Semaphore,
command_pool: vk::CommandPool,
command_buffer: vk::CommandBuffer,
}
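// NOTE: FrameData is not referenced by the routine below; per-frame synchronization is
// handled with the image_available / render_finished semaphore vectors instead.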
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct VertexV3 {
pos: [f32; 4],
color: [f32; 4],
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct UniformBufferObject {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[derive(Debug, StructOpt)]
struct Opt {
#[structopt(short, long)]
validation_layers: bool,
}
// static mut log_300: Vec<String> = vec!();
unsafe extern "system" fn debug_callback(
_message_severity: vk::DebugUtilsMessageSeverityFlagBitsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
let str_99 = String::from(CStr::from_ptr((*p_callback_data).p_message).to_string_lossy());
// log_300.push(str_99 );
eprintln!(
"{}",
CStr::from_ptr((*p_callback_data).p_message).to_string_lossy()
);
vk::FALSE
}
pub unsafe fn vulkan_routine_6300
()
{
println!("\n6300\n");
routine_pure_procedural();
}
// Create a monolith, similar to how the app version starts.
// This basically reproduces the backup routine with clearer names and better organization,
// so it can later be turned into an object.
unsafe fn routine_pure_procedural
()
{
let opt = Opt { validation_layers: true };
let event_loop = EventLoop::new();
let window = WindowBuilder::new()
.with_title(TITLE)
.with_resizable(false)
.build(&event_loop)
.unwrap();
let entry = Arc::new(EntryLoader::new().unwrap());
let application_name = CString::new("Vulkan-Routine-6300").unwrap();
let engine_name = CString::new("Peregrine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
let instance = Arc::new(unsafe { InstanceLoader::new(&entry, &instance_info) }.unwrap());
let messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
instance.create_debug_utils_messenger_ext(&messenger_info, None).expect("problem creating debug util.")
} else {
Default::default()
};
let surface = surface::create_surface(&instance, &window, None).unwrap();
let (physical_device, queue_family, format, present_mode, device_properties) =
instance.enumerate_physical_devices(None)
.unwrap()
.into_iter()
.filter_map(|physical_device| {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
// (
// .contains(vk::QueueFlags::GRAPHICS)
// && .contains(vk::QueueFlags::TRANSFER)
// )
// .contains(vk::QueueFlags::TRANSFER)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new();
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device = Arc::new(DeviceLoader::new(&instance, physical_device, &device_info).unwrap());
let queue = device.get_device_queue(queue_family, 0);
let surface_caps = instance.get_physical_device_surface_capabilities_khr(physical_device, surface).unwrap();
let mut image_count = surface_caps.min_image_count + 1;
if surface_caps.max_image_count > 0 && image_count > surface_caps.max_image_count {
image_count = surface_caps.max_image_count;
}
let swapchain_image_extent = match surface_caps.current_extent {
vk::Extent2D {
width: u32::MAX,
height: u32::MAX,
} => {
let PhysicalSize { width, height } = window.inner_size();
vk::Extent2D { width, height }
}
normal => normal,
};
let swapchain_info = vk::SwapchainCreateInfoKHRBuilder::new()
.surface(surface)
.min_image_count(image_count)
.image_format(format.format)
.image_color_space(format.color_space)
.image_extent(swapchain_image_extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.pre_transform(surface_caps.current_transform)
.composite_alpha(vk::CompositeAlphaFlagBitsKHR::OPAQUE_KHR)
.present_mode(present_mode)
.clipped(true)
.old_swapchain(vk::SwapchainKHR::null());
let swapchain = device.create_swapchain_khr(&swapchain_info, None).unwrap();
let swapchain_images = device.get_swapchain_images_khr(swapchain, None).unwrap();
let swapchain_image_views: Vec<_> = swapchain_images
.iter()
.map(|swapchain_image| {
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(*swapchain_image)
.view_type(vk::ImageViewType::_2D)
.format(format.format)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(
vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1)
.build(),
);
device.create_image_view(&image_view_info, None).unwrap()
})
.collect();
let command_pool_info = vk::CommandPoolCreateInfoBuilder::new()
.queue_family_index(queue_family)
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER);
let command_pool = device.create_command_pool(&command_pool_info, None).unwrap();
let model_path: &'static str = "assets/terrain__002__.obj";
let (models, materials) = tobj::load_obj(&model_path, &tobj::LoadOptions::default()).expect("Failed to load model object!");
let model = models[0].clone();
let materials = materials.unwrap();
let material = materials.clone().into_iter().nth(0).unwrap();
let mut vertices_terr = vec![];
let mesh = model.mesh;
let total_vertices_count = mesh.positions.len() / 3;
for i in 0..total_vertices_count {
let vertex = VertexV3 {
pos: [
mesh.positions[i * 3],
mesh.positions[i * 3 + 1],
mesh.positions[i * 3 + 2],
1.0,
],
color: [0.8, 0.20, 0.30, 0.40],
};
vertices_terr.push(vertex);
};
let mut indices_terr = mesh.indices.clone();
let physical_device_memory_properties = instance.get_physical_device_memory_properties(physical_device);
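// NOTE: the memory_type_index values hard-coded below (1 and 2) are specific to one
// particular GPU/driver. A portable version would search `physical_device_memory_properties`
// for a type with the required property flags (see the hypothetical
// `find_memory_type_index` sketch after `create_buffer` near the end of this routine).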
let vb_size = (::std::mem::size_of::<VertexV3>() * vertices_terr.len()) as vk::DeviceSize;
let ib_size = (::std::mem::size_of::<u32>() * indices_terr.len()) as vk::DeviceSize;
let info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let sb = device.create_buffer(&info, None).expect("Failed to create a staging buffer.");
let mem_reqs = device.get_buffer_memory_requirements(sb);
let alloc_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(2);
let sb_mem = device.allocate_memory(&alloc_info, None).unwrap();
device.bind_buffer_memory(sb, sb_mem, 0).unwrap();
let data_ptr = device.map_memory(
sb_mem,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut u32;
data_ptr.copy_from_nonoverlapping(indices_terr.as_ptr(), indices_terr.len());
device.unmap_memory(sb_mem);
// Todo: add destruction if this is still working
let info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::INDEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let ib = device.create_buffer(&info, None)
.expect("Failed to create index buffer.");
let mem_reqs = device.get_buffer_memory_requirements(ib);
let alloc_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(1);
let ib_mem = device.allocate_memory(&alloc_info, None).unwrap();
device.bind_buffer_memory(ib, ib_mem, 0).expect("Failed to bind index buffer memory.");
let info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cb = device.allocate_command_buffers(&info).unwrap()[0];
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(cb, &info).expect("Failed begin_command_buffer.");
let info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(ib_size);
device.cmd_copy_buffer(cb, sb, ib, &[info]);
let slice = &[cb];
device.end_command_buffer(cb).expect("Failed to end command buffer.");
let info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(slice)
.signal_semaphores(&[]);
device.queue_submit(queue, &[info], vk::Fence::null()).expect("Failed to queue submit.");
let info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let sb = device.create_buffer(&info, None).expect("Buffer create fail.");
let mem_reqs = device.get_buffer_memory_requirements(sb);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(2);
let sb_mem = device.allocate_memory(&info, None).unwrap();
device.bind_buffer_memory(sb, sb_mem, 0).expect("Bind memory fail.");
let data_ptr = device.map_memory(
sb_mem,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut VertexV3;
data_ptr.copy_from_nonoverlapping(vertices_terr.as_ptr(), vertices_terr.len());
device.unmap_memory(sb_mem);
let info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let vb = device.create_buffer(&info, None).expect("Create buffer fail.");
let mem_reqs = device.get_buffer_memory_requirements(vb);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(1);
let vb_mem = device.allocate_memory(&info, None).unwrap();
device.bind_buffer_memory(vb, vb_mem, 0).expect("Bind memory fail.");
let info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cb = device.allocate_command_buffers(&info).unwrap()[0];
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(cb, &info).expect("Begin command buffer fail.");
let info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(vb_size);
device.cmd_copy_buffer(cb, sb, vb, &[info]);
device.end_command_buffer(cb).expect("End command buffer fail.");
let slice = &[cb];
let info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(slice)
.signal_semaphores(&[]);
device.queue_submit(queue, &[info], vk::Fence::null()).expect("Queue submit fail.");
let info = vk::DescriptorSetLayoutBindingFlagsCreateInfoBuilder::new()
.binding_flags(&[vk::DescriptorBindingFlags::empty()]);
let samplers = [vk::Sampler::null()];
let binding = vk::DescriptorSetLayoutBindingBuilder::new()
.binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(1)
.stage_flags(vk::ShaderStageFlags::VERTEX)
.immutable_samplers(&samplers);
let slice = &[binding];
let info = vk::DescriptorSetLayoutCreateInfoBuilder::new()
.flags(vk::DescriptorSetLayoutCreateFlags::empty()) // ? https://docs.rs/erupt/0.22.0+204/erupt/vk1_0/struct.DescriptorSetLayoutCreateFlags.html
.bindings(slice);
let descriptor_set_layout = device.create_descriptor_set_layout(&info, None).unwrap();
let ubo_size = ::std::mem::size_of::<UniformBufferObject>();
let mut uniform_buffers: Vec<vk::Buffer> = vec![];
let mut uniform_buffers_memories: Vec<vk::DeviceMemory> = vec![];
let swapchain_image_count = swapchain_images.len();
for _ in 0..swapchain_image_count {
let (uniform_buffer, uniform_buffer_memory) = create_buffer(
&device,
ubo_size as u64,
vk::BufferUsageFlags::UNIFORM_BUFFER,
2,
);
uniform_buffers.push(uniform_buffer);
uniform_buffers_memories.push(uniform_buffer_memory);
}
let mut uniform_transform = UniformBufferObject {
model:
// Matrix4::from_translation(Vector3::new(5.0, 5.0, 5.0))
Matrix4::from_angle_y(Deg(10.0))
* Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5),
view: Matrix4::look_at_rh(
Point3::new(0.80, 0.80, 0.80),
Point3::new(0.0, 0.0, 0.0),
Vector3::new(0.0, 0.0, 1.0),
),
proj: {
let mut proj = cgmath::perspective(
Deg(30.0),
swapchain_image_extent.width as f32
/ swapchain_image_extent.height as f32,
0.1,
10.0,
);
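// Flip the Y axis of the projection: cgmath builds an OpenGL-style clip space,
// while Vulkan's clip-space Y points down.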
proj[1][1] = proj[1][1] * -1.0;
proj
},
};
let pool_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(swapchain_image_count as u32);
let pool_sizes = &[pool_size];
let set_layouts = &[descriptor_set_layout];
let pool_info = vk::DescriptorPoolCreateInfoBuilder::new()
.pool_sizes(pool_sizes)
.max_sets(swapchain_image_count as u32);
let desc_pool = device.create_descriptor_pool(&pool_info, None).unwrap();
let d_set_alloc_info = vk::DescriptorSetAllocateInfoBuilder::new()
.descriptor_pool(desc_pool)
.set_layouts(set_layouts);
let d_sets = device.allocate_descriptor_sets(&d_set_alloc_info).expect("failed in alloc DescriptorSet");
let ubo_size = ::std::mem::size_of::<UniformBufferObject>() as u64;
for i in 0..swapchain_image_count {
let d_buf_info = vk::DescriptorBufferInfoBuilder::new()
.buffer(uniform_buffers[i])
.offset(0)
.range(ubo_size);
let d_buf_infos = [d_buf_info];
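// NOTE: `set_layouts` above holds a single layout, so `allocate_descriptor_sets`
// returned exactly one set; every iteration of this loop rewrites d_sets[0].
// One descriptor set per swapchain image would need one layout entry per image.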
let d_write_builder = vk::WriteDescriptorSetBuilder::new()
.dst_set(d_sets[0])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(&d_buf_infos);
let d_write_sets = [d_write_builder];
device.update_descriptor_sets(&d_write_sets, &[]);
update_uniform_buffer
(
&device,
&mut uniform_transform,
&mut uniform_buffers_memories,
&mut uniform_buffers,
i as usize,
2.3
);
}
let depth_image_info = vk::ImageCreateInfoBuilder::new()
.flags(vk::ImageCreateFlags::empty())
.image_type(vk::ImageType::_2D)
.format(vk::Format::D32_SFLOAT)
.extent(vk::Extent3D {
width: swapchain_image_extent.width,
height: swapchain_image_extent.height,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlagBits::_1)
.tiling(vk::ImageTiling::OPTIMAL)
.usage(vk::ImageUsageFlags::DEPTH_STENCIL_ATTACHMENT)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0])
.initial_layout(vk::ImageLayout::UNDEFINED);
let depth_image = device.create_image(&depth_image_info, None)
.expect("Failed to create depth (texture) Image.");
let dpth_img_mem_reqs = device.get_image_memory_requirements(depth_image);
let dpth_img_mem_info = vk::MemoryAllocateInfoBuilder::new()
.memory_type_index(1)
.allocation_size(dpth_img_mem_reqs.size);
let depth_image_memory = device.allocate_memory(&dpth_img_mem_info, None)
.expect("Failed to alloc mem for depth image.");
device.bind_image_memory(depth_image, depth_image_memory, 0)
.expect("Failed to bind depth image memory.");
let depth_image_view_info = vk::ImageViewCreateInfoBuilder::new()
.flags(vk::ImageViewCreateFlags::empty())
.image(depth_image)
.view_type(vk::ImageViewType::_2D)
.format(vk::Format::D32_SFLOAT)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::DEPTH,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
});
let depth_image_view = device.create_image_view(&depth_image_view_info, None)
.expect("Failed to create image view.");
let entry_point = CString::new("main").unwrap();
let vert_decoded = utils::decode_spv(SHADER_VERT).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&vert_decoded);
let shader_vert = unsafe { device.create_shader_module(&module_info, None) }.unwrap();
let frag_decoded = utils::decode_spv(SHADER_FRAG).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&frag_decoded);
let shader_frag = unsafe { device.create_shader_module(&module_info, None) }.unwrap();
let shader_stages = vec![
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::VERTEX)
.module(shader_vert)
.name(&entry_point),
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::FRAGMENT)
.module(shader_frag)
.name(&entry_point),
];
let vertex_buffer_bindings_desc_info = vk::VertexInputBindingDescriptionBuilder::new()
.binding(0)
.stride(std::mem::size_of::<VertexV3>() as u32)
.input_rate(vk::VertexInputRate::VERTEX);
let vert_buff_att_desc_info_pos = vk::VertexInputAttributeDescriptionBuilder::new()
.location(0)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, pos) as u32);
let vert_buff_att_desc_info_color = vk::VertexInputAttributeDescriptionBuilder::new()
.location(1)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, color) as u32);
// Bind the description arrays to named locals so the pointers captured by
// build_dangling() stay valid until pipeline creation.
let vertex_binding_descriptions = [vertex_buffer_bindings_desc_info];
let vertex_attribute_descriptions = [vert_buff_att_desc_info_pos, vert_buff_att_desc_info_color];
let vertex_input = vk::PipelineVertexInputStateCreateInfoBuilder::new()
.flags(vk::PipelineVertexInputStateCreateFlags::empty())
.vertex_binding_descriptions(&vertex_binding_descriptions)
.vertex_attribute_descriptions(&vertex_attribute_descriptions)
.build_dangling();
let input_assembly = vk::PipelineInputAssemblyStateCreateInfoBuilder::new()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST)
.primitive_restart_enable(false);
let viewports = vec![vk::ViewportBuilder::new()
.x(0.0)
.y(0.0)
.width(swapchain_image_extent.width as f32)
.height(swapchain_image_extent.height as f32)
.min_depth(0.0)
.max_depth(1.0)];
let scissors = vec![vk::Rect2DBuilder::new()
.offset(vk::Offset2D { x: 0, y: 0 })
.extent(swapchain_image_extent)];
let viewport_state = vk::PipelineViewportStateCreateInfoBuilder::new()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer = vk::PipelineRasterizationStateCreateInfoBuilder::new()
.depth_clamp_enable(true)
.rasterizer_discard_enable(false)
.polygon_mode(vk::PolygonMode::LINE)
.line_width(1.0)
.cull_mode(vk::CullModeFlags::NONE)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE);
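// NOTE: PolygonMode::LINE and depth_clamp_enable(true) require the `fillModeNonSolid`
// and `depthClamp` device features, which are not enabled in the
// PhysicalDeviceFeaturesBuilder used for this routine's device creation.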
let multisampling = vk::PipelineMultisampleStateCreateInfoBuilder::new()
.sample_shading_enable(false)
.rasterization_samples(vk::SampleCountFlagBits::_1);
let color_blend_attachments = vec![vk::PipelineColorBlendAttachmentStateBuilder::new()
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.blend_enable(false)];
let color_blending = vk::PipelineColorBlendStateCreateInfoBuilder::new()
.logic_op_enable(false)
.attachments(&color_blend_attachments);
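// NOTE: with depth_test_enable(false) the depth attachment is effectively unused here;
// per the spec, depth writes only occur when the depth test is enabled.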
let pipeline_stencil_info = vk::PipelineDepthStencilStateCreateInfoBuilder::new()
.depth_test_enable(false)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::LESS)
.depth_bounds_test_enable(false)
.min_depth_bounds(0.0)
.max_depth_bounds(1.0)
.front(vk::StencilOpStateBuilder::new().build())
.back(vk::StencilOpStateBuilder::new().build());
let desc_layouts_slc = &[descriptor_set_layout];
let pipeline_layout_info = vk::PipelineLayoutCreateInfoBuilder::new()
.set_layouts(desc_layouts_slc);
let pipeline_layout = device.create_pipeline_layout(&pipeline_layout_info, None).unwrap();
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(format.format)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let color_attachment_refs = vec![vk::AttachmentReferenceBuilder::new()
.attachment(0)
.layout(vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL)];
let subpasses = vec![vk::SubpassDescriptionBuilder::new()
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.color_attachments(&color_attachment_refs)
.depth_stencil_attachment(&depth_attach_ref)];
let dependencies = vec![vk::SubpassDependencyBuilder::new()
.src_subpass(vk::SUBPASS_EXTERNAL)
.dst_subpass(0)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.src_access_mask(vk::AccessFlags::empty())
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE)];
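// NOTE: because the render pass also clears and writes a depth attachment, validation
// layers typically expect this dependency to include EARLY_FRAGMENT_TESTS /
// LATE_FRAGMENT_TESTS in the stage masks and DEPTH_STENCIL_ATTACHMENT_WRITE in
// dst_access_mask.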
let render_pass_info = vk::RenderPassCreateInfoBuilder::new()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&dependencies);
let render_pass = unsafe { device.create_render_pass(&render_pass_info, None) }.unwrap();
let pipeline_info = vk::GraphicsPipelineCreateInfoBuilder::new()
.stages(&shader_stages)
.vertex_input_state(&vertex_input)
.input_assembly_state(&input_assembly)
.depth_stencil_state(&pipeline_stencil_info)
.viewport_state(&viewport_state)
.rasterization_state(&rasterizer)
.multisample_state(&multisampling)
.color_blend_state(&color_blending)
.layout(pipeline_layout)
.render_pass(render_pass)
.subpass(0);
let pipeline = unsafe {
device.create_graphics_pipelines(vk::PipelineCache::null(), &[pipeline_info], None)
}
.unwrap()[0];
let swapchain_framebuffers: Vec<_> = swapchain_image_views
.iter()
.map(|image_view| {
let attachments = vec![*image_view, depth_image_view];
let framebuffer_info = vk::FramebufferCreateInfoBuilder::new()
.render_pass(render_pass)
.attachments(&attachments)
.width(swapchain_image_extent.width)
.height(swapchain_image_extent.height)
.layers(1);
unsafe { device.create_framebuffer(&framebuffer_info, None) }.unwrap()
})
.collect();
let cmd_buf_allocate_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(swapchain_framebuffers.len() as _);
let cmd_bufs = unsafe { device.allocate_command_buffers(&cmd_buf_allocate_info) }.unwrap();
for (&cmd_buf, &framebuffer) in cmd_bufs.iter().zip(swapchain_framebuffers.iter()) {
let cmd_buf_begin_info = vk::CommandBufferBeginInfoBuilder::new();
unsafe { device.begin_command_buffer(cmd_buf, &cmd_buf_begin_info) }.unwrap();
let clear_values = vec![
vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.0, 1.0],
},
},
vk::ClearValue {
depth_stencil: vk::ClearDepthStencilValue {
depth: 1.0,
stencil: 0,
},
},
];
let render_pass_begin_info = vk::RenderPassBeginInfoBuilder::new()
.render_pass(render_pass)
.framebuffer(framebuffer)
.render_area(vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: swapchain_image_extent,
})
.clear_values(&clear_values);
unsafe {
device.cmd_begin_render_pass(
cmd_buf,
&render_pass_begin_info,
vk::SubpassContents::INLINE,
);
device.cmd_bind_pipeline(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline);
device.cmd_bind_index_buffer(cmd_buf, ib, 0, vk::IndexType::UINT32);
device.cmd_bind_vertex_buffers(cmd_buf, 0, &[vb], &[0]);
device.cmd_bind_descriptor_sets(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline_layout, 0, &d_sets, &[]);
// Draw the whole index list once (instance count 1).
device.cmd_draw_indexed(cmd_buf, indices_terr.len() as u32, 1, 0, 0, 0);
device.cmd_end_render_pass(cmd_buf);
device.end_command_buffer(cmd_buf).unwrap();
}
}
let semaphore_info = vk::SemaphoreCreateInfoBuilder::new();
let image_available_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_semaphore(&semaphore_info, None) }.unwrap())
.collect();
let render_finished_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_semaphore(&semaphore_info, None) }.unwrap())
.collect();
let fence_info = vk::FenceCreateInfoBuilder::new().flags(vk::FenceCreateFlags::SIGNALED);
let in_flight_fences: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_fence(&fence_info, None) }.unwrap())
.collect();
let mut images_in_flight: Vec<_> = swapchain_images.iter().map(|_| vk::Fence::null()).collect();
let mut frame = 0;
#[allow(clippy::collapsible_match, clippy::single_match)]
event_loop.run(move |event, _, control_flow| match event {
Event::NewEvents(StartCause::Init) => {
*control_flow = ControlFlow::Poll;
}
Event::WindowEvent { event, .. } => match event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
_ => (),
},
Event::DeviceEvent { event, .. } => match event {
DeviceEvent::Key(KeyboardInput {
virtual_keycode: Some(keycode),
state,
..
}) => match (keycode, state) {
(VirtualKeyCode::Escape, ElementState::Released) => {
*control_flow = ControlFlow::Exit
}
_ => (),
},
_ => (),
},
Event::MainEventsCleared => {
unsafe {
device
.wait_for_fences(&[in_flight_fences[frame]], true, u64::MAX)
.unwrap();
}
let image_index = unsafe {
device.acquire_next_image_khr(
swapchain,
u64::MAX,
image_available_semaphores[frame],
vk::Fence::null(),
)
}
.unwrap();
update_uniform_buffer(&device, &mut uniform_transform, &mut uniform_buffers_memories, &mut uniform_buffers, image_index as usize, 3.2);
let image_in_flight = images_in_flight[image_index as usize];
if !image_in_flight.is_null() {
unsafe { device.wait_for_fences(&[image_in_flight], true, u64::MAX) }.unwrap();
}
images_in_flight[image_index as usize] = in_flight_fences[frame];
let wait_semaphores = vec![image_available_semaphores[frame]];
let command_buffers = vec![cmd_bufs[image_index as usize]];
let signal_semaphores = vec![render_finished_semaphores[frame]];
let submit_info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&wait_semaphores)
.wait_dst_stage_mask(&[vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT])
.command_buffers(&command_buffers)
.signal_semaphores(&signal_semaphores);
unsafe {
let in_flight_fence = in_flight_fences[frame];
device.reset_fences(&[in_flight_fence]).unwrap();
device
.queue_submit(queue, &[submit_info], in_flight_fence)
.unwrap()
}
let swapchains = vec![swapchain];
let image_indices = vec![image_index];
let present_info = vk::PresentInfoKHRBuilder::new()
.wait_semaphores(&signal_semaphores)
.swapchains(&swapchains)
.image_indices(&image_indices);
unsafe { device.queue_present_khr(queue, &present_info) }.unwrap();
frame = (frame + 1) % FRAMES_IN_FLIGHT;
}
// Event::LoopDestroyed => unsafe {
// device.device_wait_idle().unwrap();
// for &semaphore in image_available_semaphores
// .iter()
// .chain(render_finished_semaphores.iter())
// {
// device.destroy_semaphore(semaphore, None);
// }
// for &fence in &in_flight_fences {
// device.destroy_fence(fence, None);
// }
// device.destroy_command_pool(command_pool, None);
// for &framebuffer in &swapchain_framebuffers {
// device.destroy_framebuffer(framebuffer, None);
// }
// device.destroy_pipeline(pipeline, None);
// device.destroy_render_pass(render_pass, None);
// device.destroy_pipeline_layout(pipeline_layout, None);
// device.destroy_shader_module(shader_vert, None);
// device.destroy_shader_module(shader_frag, None);
// for &image_view in &swapchain_image_views {
// device.destroy_image_view(image_view, None);
// }
// device.destroy_swapchain_khr(swapchain, None);
// device.destroy_device(None);
// instance.destroy_surface_khr(surface, None);
// if !messenger.is_null() {
// instance.destroy_debug_utils_messenger_ext(messenger, None);
// }
// instance.destroy_instance(None);
// println!("Exited cleanly");
// },
_ => (),
})
}
fn update_uniform_buffer(
device: &DeviceLoader,
uniform_transform: &mut UniformBufferObject,
ubo_mems: &mut Vec<vk::DeviceMemory>,
ubos: &mut Vec<vk::Buffer>,
current_image: usize,
delta_time: f32
)
{
uniform_transform.model =
Matrix4::from_axis_angle(Vector3::new(1.0, 0.0, 0.0), Deg(0.110) * delta_time)
* uniform_transform.model;
let uni_transform_slice = [uniform_transform.clone()];
let buffer_size = (std::mem::size_of::<UniformBufferObject>() * uni_transform_slice.len()) as u64;
unsafe {
let data_ptr =
device.map_memory(
ubo_mems[current_image],
0,
buffer_size,
vk::MemoryMapFlags::empty(),
).expect("Failed to map memory.") as *mut UniformBufferObject;
data_ptr.copy_from_nonoverlapping(uni_transform_slice.as_ptr(), uni_transform_slice.len());
device.unmap_memory(ubo_mems[current_image]);
}
}
fn create_buffer(
device: &DeviceLoader,
// flags: vk::BufferCreateFlags,
size: vk::DeviceSize,
usage: vk::BufferUsageFlags,
memory_type_index: u32,
// queue_family_indices: &[u32],
) -> (vk::Buffer, vk::DeviceMemory) {
let buffer_create_info = vk::BufferCreateInfoBuilder::new()
// .flags(&[])
.size(size)
.usage(usage)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0]);
let buffer = unsafe {
device.create_buffer(&buffer_create_info, None)
.expect("Failed to create buffer.")
};
let mem_reqs = unsafe { device.get_buffer_memory_requirements(buffer) };
let allocate_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(memory_type_index);
let buffer_memory = unsafe {
device
.allocate_memory(&allocate_info, None)
.expect("Failed to allocate memory for buffer.")
};
unsafe {
device
.bind_buffer_memory(buffer, buffer_memory, 0)
.expect("Failed to bind buffer.")
};
(buffer, buffer_memory)
}
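// A minimal sketch (not used by the routines above) of how the hard-coded
// memory_type_index values could be selected from the physical-device memory
// properties instead. `required_flags` would typically be HOST_VISIBLE | HOST_COHERENT
// for staging/uniform buffers and DEVICE_LOCAL for the GPU-side copies.
fn find_memory_type_index(
memory_properties: &vk::PhysicalDeviceMemoryProperties,
memory_type_bits: u32,
required_flags: vk::MemoryPropertyFlags,
) -> Option<u32> {
(0..memory_properties.memory_type_count).find(|&i| {
// The resource must allow this type (bit i of memory_type_bits set) and the
// type must carry all of the requested property flags.
(memory_type_bits & (1u32 << i)) != 0
&& memory_properties.memory_types[i as usize]
.property_flags
.contains(required_flags)
})
}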
// Vulkan Scratch Routine 5600 ----------------- -- -- -- ---- -
#![allow(dead_code, unused_variables, clippy::too_many_arguments, clippy::unnecessary_wraps)]
use erupt::{
cstr,
utils::{self, surface},
vk, DeviceLoader, EntryLoader, InstanceLoader,
vk::{Device, MemoryMapFlags},
};
use cgmath::{Deg, Rad, Matrix4, Point3, Vector3, Vector4};
use std::{
ffi::{c_void, CStr, CString},
fs,
fs::{write, OpenOptions},
io::{stdin, stdout, Write},
io::prelude::*,
mem::*,
os::raw::c_char,
ptr,
result::Result,
result::Result::*,
string::String,
sync::{Arc, Mutex},
thread,
time,
};
use libc;
use smallvec::SmallVec;
use raw_window_handle::{HasRawWindowHandle, RawWindowHandle};
use memoffset::offset_of;
use simple_logger::SimpleLogger;
use winit::{
dpi::PhysicalSize,
event::{
Event, KeyboardInput, WindowEvent,
ElementState, StartCause, VirtualKeyCode,
DeviceEvent,
},
monitor::{MonitorHandle, VideoMode},
event_loop::{ControlFlow, EventLoop},
window::{Fullscreen, Window, WindowBuilder}
};
use structopt::StructOpt;
use std::ptr::copy_nonoverlapping as memcpy;
use nalgebra_glm as glm;
const VALIDATION_ENABLED: bool = true;
const MAX_FRAMES_IN_FLIGHT: usize = 2;
const TITLE: &str = "Vulkan-Routine-5600";
const FRAMES_IN_FLIGHT: usize = 2;
const LAYER_KHRONOS_VALIDATION: *const c_char = cstr!("VK_LAYER_KHRONOS_validation");
const SHADER_VERT: &[u8] = include_bytes!("../spv/s_400_.vert.spv");
const SHADER_FRAG: &[u8] = include_bytes!("../spv/s1.frag.spv");
pub unsafe fn vulkan_routine_5600() {
println!("33333");
let event_loop = EventLoop::new();
let window = WindowBuilder::new()
.with_title(TITLE)
// .with_maximized(true)
.with_resizable(false)
.build(&event_loop)
.unwrap();
let mut app = unsafe { App::create(&window) }.unwrap();
let mut destroying = false;
let mut minimized = false;
event_loop.run(move |event, _, control_flow| {
*control_flow = ControlFlow::Poll;
match event {
// Render a frame if our Vulkan app is not being destroyed.
Event::MainEventsCleared if !destroying && !minimized => unsafe { app.render(&window) }.unwrap(),
// Mark the window as having been resized.
Event::WindowEvent { event: WindowEvent::Resized(size), .. } => {
if size.width == 0 || size.height == 0 {
minimized = true;
} else {
minimized = false;
app.resized = true;
}
}
// Destroy our Vulkan app.
Event::WindowEvent { event: WindowEvent::CloseRequested, .. } => {
destroying = true;
*control_flow = ControlFlow::Exit;
unsafe { app.destroy(); }
}
// Handle keyboard events.
Event::WindowEvent { event: WindowEvent::KeyboardInput { input, .. }, .. } => {
}
_ => {}
}
});
}
struct App {
device: Arc<erupt::DeviceLoader>,
instance: Arc<erupt::InstanceLoader>,
entry: Arc<erupt::EntryLoader>,
data: AppData,
frame: usize,
resized: bool,
start: std::time::Instant,
models: usize,
}
impl App {
unsafe fn create
<'a>
(window: &Window)
-> Result<Self, &'a str>
{
let mut data = AppData::default();
let opt = Opt { validation_layers: true };
let entry = Arc::new(EntryLoader::new().unwrap());
println!(
"\n\n{} - Vulkan Instance {}.{}.{}",
TITLE,
vk::api_version_major(entry.instance_version()),
vk::api_version_minor(entry.instance_version()),
vk::api_version_patch(entry.instance_version())
);
let instance = create_instance(window, &entry, &mut data).unwrap();
data.surface = erupt::utils::surface::create_surface(&instance, &window, None).unwrap();
let event_loop = EventLoop::new();
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to device_layers from validation.");
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
let messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
instance.create_debug_utils_messenger_ext(&messenger_info, None).unwrap()
} else {
Default::default()
};
let (physical_device, queue_family, format, present_mode, device_properties) = instance.enumerate_physical_devices(None)
.unwrap()
.into_iter()
.filter_map(|physical_device| unsafe {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
data.surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, data.surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, data.surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
data.physical_device = physical_device;
data.queue_family = queue_family;
data.format = format;
data.present_mode = present_mode;
data.surface_format_khr = format; // duplicate key, todo fix
// data.device_properties = device_properties;
println!("\n\n\nUsing physical device: {:?}\n\n\n", CStr::from_ptr(device_properties.device_name.as_ptr()));
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new()
.fill_mode_non_solid(true)
.depth_clamp(true);
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device = Arc::new(DeviceLoader::new(&instance, physical_device, &device_info).unwrap());
create_swapchain
(
window,
&instance,
&device,
&mut data
);
create_swapchain_image_views(&device, &mut data);
create_render_pass(&instance, &device, &mut data);
create_descriptor_set_layout(&device, &mut data);
create_pipeline(&device, &mut data);
create_command_pools(&instance, &device, &mut data);
create_depth_objects(&instance, &device, &mut data);
create_framebuffers(&device, &mut data);
load_model(&mut data);
create_vertex_buffer(&instance, &device, &mut data);
create_index_buffer(&instance, &device, &mut data);
create_uniform_buffers(&instance, &device, &mut data);
create_descriptor_pool(&device, &mut data);
create_descriptor_sets(&device, &mut data);
create_command_buffers(&device, &mut data);
create_sync_objects(&device, &mut data);
Ok(Self {
device,
instance,
entry,
data,
frame: 0,
resized: false,
start: std::time::Instant::now(),
models: 1,
})
}
unsafe fn render
<'a>
(
&mut self,
window: &Window,
)
-> Result<(), &'a str>
{
println!("\n\n1111111\n\n\n");
let in_flight_fence = self.data.in_flight_fences[self.frame];
self.device
.wait_for_fences(&[in_flight_fence], true, u64::MAX)
.unwrap();
let image_index = self.device.acquire_next_image_khr
(
self.data.swapchain,
u64::MAX,
self.data.image_available_semaphores[self.frame],
vk::Fence::null(),
).unwrap();
self.update_command_buffer(image_index as usize).unwrap();
self.update_uniform_buffer(image_index as usize).unwrap();
let image_in_flight = self.data.images_in_flight[image_index as usize];
if !image_in_flight.is_null() {
self.device
.wait_for_fences(&[image_in_flight], true, u64::MAX).unwrap();
}
self.data.images_in_flight[image_index as usize] = in_flight_fence;
let wait_semaphores = &[self.data.image_available_semaphores[self.frame]];
let wait_stages = &[vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT];
let command_buffers = &[self.data.command_buffers[image_index as usize]];
// vec![cmd_bufs[image_index as usize]];
let signal_semaphores = &[self.data.render_finished_semaphores[self.frame]];
let submit_info = vk::SubmitInfoBuilder::new()
.wait_semaphores(wait_semaphores)
.wait_dst_stage_mask(wait_stages)
.command_buffers(command_buffers)
.signal_semaphores(signal_semaphores);
self.device.reset_fences(&[in_flight_fence]).unwrap();
self.device
.queue_submit(self.data.graphics_queue, &[submit_info], in_flight_fence).unwrap();
let swapchain = &[self.data.swapchain];
let image_indices = &[image_index as u32];
let present_info = vk::PresentInfoKHRBuilder::new()
.wait_semaphores(wait_semaphores)
.swapchains(swapchain)
.image_indices(image_indices);
let result = self.device.queue_present_khr
(
self.data.present_queue,
&present_info
).unwrap();
if self.resized {
self.resized = false;
self.recreate_swapchain(window).unwrap();
}
self.frame = (self.frame + 1) % MAX_FRAMES_IN_FLIGHT;
Ok(())
}
#[rustfmt::skip]
unsafe fn update_command_buffer
<'a>
(
&mut self,
image_index: usize,
)
-> Result<(), &'a str>
{
let command_pool = self.data.command_pools[image_index];
self.device.reset_command_pool
(
command_pool,
vk::CommandPoolResetFlags::empty()
).unwrap();
let allocate_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let command_buffer = self.device.allocate_command_buffers(&allocate_info).unwrap()[0];
self.data.command_buffers[image_index] = command_buffer;
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
self.device.begin_command_buffer(command_buffer, &info).unwrap();
let render_area = vk::Rect2DBuilder::new()
.offset(vk::Offset2D::default())
.extent(self.data.swapchain_extent)
.build();
let color_clear_value = vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.0, 1.0],
},
};
let depth_clear_values = vk::ClearValue {
depth_stencil: vk::ClearDepthStencilValue { depth: 1.0, stencil: 0 },
};
let clear_values = &[color_clear_value, depth_clear_values];
let info = vk::RenderPassBeginInfoBuilder::new()
.render_pass(self.data.render_pass)
.framebuffer(self.data.framebuffers[image_index])
.render_area(render_area)
.clear_values(clear_values);
self.device.cmd_begin_render_pass(command_buffer, &info, vk::SubpassContents::SECONDARY_COMMAND_BUFFERS);
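// With SubpassContents::SECONDARY_COMMAND_BUFFERS, all draw commands for this subpass
// must come from secondary command buffers recorded with RENDER_PASS_CONTINUE
// (see update_secondary_command_buffer below); the primary buffer only executes them.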
let secondary_command_buffers = (0..self.models)
.map(|i| self.update_secondary_command_buffer(image_index, i))
.collect::<Result<Vec<_>, _>>().unwrap();
self.device.cmd_execute_commands(command_buffer, &secondary_command_buffers[..]);
self.device.cmd_end_render_pass(command_buffer);
self.device.end_command_buffer(command_buffer).unwrap();
Ok(())
}
#[rustfmt::skip]
unsafe fn update_secondary_command_buffer
<'a>
(
&mut self,
image_index: usize,
model_index: usize,
)
-> Result<vk::CommandBuffer, &'a str>
{
let allocate_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(self.data.command_pools[image_index])
.level(vk::CommandBufferLevel::SECONDARY)
.command_buffer_count(1);
let command_buffer = self.device.allocate_command_buffers(&allocate_info).unwrap()[0];
let time = self.start.elapsed().as_secs_f32();
let mut view_matrix: Matrix4<f32> = Matrix4::from_angle_y(Deg(0.8 * time)) * Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5);
// let (_, view_matrix_bytes, _) = view_matrix.as_slice().align_to::<u8>();
let view_matrix_bytes: *mut libc::c_void = &mut view_matrix as *mut _ as *mut libc::c_void;
let inheritance_info = vk::CommandBufferInheritanceInfoBuilder::new()
.render_pass(self.data.render_pass)
.subpass(0)
.framebuffer(self.data.framebuffers[image_index]);
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::RENDER_PASS_CONTINUE)
.inheritance_info(&inheritance_info);
self.device.begin_command_buffer(command_buffer, &info).unwrap();
self.device.cmd_bind_pipeline(command_buffer, vk::PipelineBindPoint::GRAPHICS, self.data.pipeline);
self.device.cmd_bind_vertex_buffers(command_buffer, 0, &[self.data.vertex_buffer], &[0]);
self.device.cmd_bind_index_buffer(command_buffer, self.data.index_buffer, 0, vk::IndexType::UINT32);
self.device.cmd_bind_descriptor_sets
(
command_buffer,
vk::PipelineBindPoint::GRAPHICS,
self.data.pipeline_layout,
0,
&[self.data.descriptor_sets[image_index]],
&[],
);
let push_size = std::mem::size_of::<Matrix4<f32>>() as u32;
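// This assumes the pipeline layout (built in create_pipeline, not shown here) declares
// a VERTEX-stage push-constant range of at least size_of::<Matrix4<f32>>() = 64 bytes.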
self.device.cmd_push_constants
(
command_buffer,
self.data.pipeline_layout,
vk::ShaderStageFlags::VERTEX,
0,
push_size,
view_matrix_bytes,
);
// Draw the whole index list once (instance count 1).
self.device.cmd_draw_indexed(command_buffer, self.data.indices.len() as u32, 1, 0, 0, 0);
self.device.end_command_buffer(command_buffer).unwrap();
Ok(command_buffer)
}
unsafe fn update_uniform_buffer
<'a>
(&self, image_index: usize)
-> Result<(), &'a str>
{
let mut model = Matrix4::from_angle_y(Deg(10.0)) * Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5);
let mut proj = cgmath::perspective(
Deg(30.0),
self.data.swapchain_image_extent.width as f32
/ self.data.swapchain_image_extent.height as f32,
0.1,
10.0,
);
let ubo = UniformBufferObject { model, proj };
let memory = self.device.map_memory
(
self.data.uniform_buffers_memory[image_index],
0,
size_of::<UniformBufferObject>() as u64,
vk::MemoryMapFlags::empty(),
).unwrap();
memcpy(&ubo, memory.cast(), 1);
self.device.unmap_memory(self.data.uniform_buffers_memory[image_index]);
Ok(())
}
#[rustfmt::skip]
unsafe fn recreate_swapchain
<'a>
(
&mut self,
window: &Window,
)
-> Result<(), &'a str>
{
self.device.device_wait_idle().unwrap();
self.destroy_swapchain();
create_swapchain(window, &self.instance, &self.device, &mut self.data).unwrap();
create_swapchain_image_views(&self.device, &mut self.data).unwrap();
create_render_pass(&self.instance, &self.device, &mut self.data).unwrap();
create_pipeline(&self.device, &mut self.data).unwrap();
// create_color_objects(&self.instance)
create_depth_objects(&self.instance, &self.device, &mut self.data).unwrap();
create_framebuffers(&self.device, &mut self.data).unwrap();
create_uniform_buffers(&self.instance, &self.device, &mut self.data).unwrap();
create_descriptor_pool(&self.device, &mut self.data).unwrap();
create_descriptor_sets(&self.device, &mut self.data).unwrap();
create_command_buffers(&self.device, &mut self.data).unwrap();
self.data.images_in_flight.resize(self.data.swapchain_images.len(), vk::Fence::null());
Ok(())
}
#[rustfmt::skip]
unsafe fn destroy
<'a>
(
&mut self
)
{
self.device.device_wait_idle().unwrap();
self.destroy_swapchain();
self.data.in_flight_fences.iter().for_each(|f| self.device.destroy_fence(*f, None));
self.data.render_finished_semaphores.iter().for_each(|s| self.device.destroy_semaphore(*s, None));
self.data.command_pools.iter().for_each(|p| self.device.destroy_command_pool(*p, None));
self.device.free_memory(self.data.index_buffer_memory, None);
self.device.destroy_buffer(self.data.index_buffer, None);
self.device.free_memory(self.data.vertex_buffer_memory, None);
self.device.destroy_buffer(self.data.vertex_buffer, None);
// self.device.destroy_sampler(self.)
// self.device.destroy_imageview(self.data.)
self.device.destroy_command_pool(self.data.command_pool, None);
self.device.destroy_descriptor_set_layout(self.data.descriptor_set_layouts[0], None);
self.device.destroy_device(None);
self.instance.destroy_surface_khr(self.data.surface, None);
if VALIDATION_ENABLED {
self.instance.destroy_debug_utils_messenger_ext(self.data.messenger, None);
}
self.instance.destroy_instance(None);
}
#[rustfmt::skip]
unsafe fn destroy_swapchain
<'a>
(
&mut self
)
{
self.device.destroy_descriptor_pool(self.data.descriptor_pool, None);
self.data.uniform_buffers_memory.iter().for_each(|m| self.device.free_memory(*m, None));
self.data.uniform_buffers.iter().for_each(|b| self.device.destroy_buffer(*b, None));
self.device.destroy_image_view(self.data.depth_image_view, None);
self.device.free_memory(self.data.depth_image_memory, None);
self.device.destroy_image(self.data.depth_image, None);
// self.device.destroy_image_view(self.data.color)
// self.device.free_memory(color image)
self.data.framebuffers.iter().for_each(|f| self.device.destroy_framebuffer(*f, None));
self.device.destroy_pipeline(self.data.pipeline, None);
self.device.destroy_pipeline_layout(self.data.pipeline_layout, None);
self.device.destroy_render_pass(self.data.render_pass, None);
self.data.swapchain_image_views.iter().for_each(|v| self.device.destroy_image_view(*v, None));
self.device.destroy_swapchain_khr(self.data.swapchain, None);
}
}
#[derive(Clone, Debug, Default)]
struct AppData {
present_mode: erupt::extensions::khr_surface::PresentModeKHR,
messenger: vk::DebugUtilsMessengerEXT,
surface: vk::SurfaceKHR,
physical_device: vk::PhysicalDevice,
msaa_samples: vk::SampleCountFlagBits,
graphics_queue: vk::Queue,
present_queue: vk::Queue,
swapchain_format: vk::Format,
swapchain_extent: vk::Extent2D,
swapchain: vk::SwapchainKHR,
swapchain_images: Vec<vk::Image>,
swapchain_image_views: Vec<vk::ImageView>,
swapchain_image_extent: vk::Extent2D,
render_pass: vk::RenderPass,
descriptor_set_layouts: Vec<vk::DescriptorSetLayout>,
format: erupt::extensions::khr_surface::SurfaceFormatKHR,
surface_format_khr: erupt::extensions::khr_surface::SurfaceFormatKHR,
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
framebuffers: Vec<vk::Framebuffer>,
command_pool: vk::CommandPool,
color_image: vk::Image,
color_image_memory: vk::DeviceMemory,
color_image_view: vk::ImageView,
depth_image: vk::Image,
depth_image_memory: vk::DeviceMemory,
depth_image_view: vk::ImageView,
// mip_levels: u32,
vertices: Vec<VertexV3>,
indices: Vec<u32>,
vertex_buffer: vk::Buffer,
vertex_buffer_memory: vk::DeviceMemory,
index_buffer: vk::Buffer,
index_buffer_memory: vk::DeviceMemory,
uniform_buffers: Vec<vk::Buffer>,
uniform_buffers_memory: Vec<vk::DeviceMemory>,
descriptor_pool: vk::DescriptorPool,
descriptor_sets: Vec<vk::DescriptorSet>,
image_available_semaphores: Vec<vk::Semaphore>,
render_finished_semaphores: Vec<vk::Semaphore>,
in_flight_fences: Vec<vk::Fence>,
images_in_flight: Vec<vk::Fence>,
command_pools: Vec<vk::CommandPool>,
command_buffers: Vec<vk::CommandBuffer>,
queue_family: u32,
}
unsafe extern "system" fn debug_callback(
_message_severity: vk::DebugUtilsMessageSeverityFlagBitsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
let str_99 = String::from(CStr::from_ptr((*p_callback_data).p_message).to_string_lossy());
// log_300.push(str_99 );
eprintln!(
"{}",
CStr::from_ptr((*p_callback_data).p_message).to_string_lossy()
);
vk::FALSE
}
unsafe fn create_buffer
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
size: vk::DeviceSize,
usage: vk::BufferUsageFlags,
memory_type_index: u32,
)
-> Result<(vk::Buffer, vk::DeviceMemory), &'a str>
{
let buffer_info = vk::BufferCreateInfoBuilder::new()
.size(size)
.usage(usage)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let buffer = device.create_buffer(&buffer_info, None).unwrap();
let requirements = device.get_buffer_memory_requirements(buffer);
let memory_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(requirements.size)
.memory_type_index(memory_type_index);
let buffer_memory = device.allocate_memory(&memory_info, None).unwrap();
device.bind_buffer_memory(buffer, buffer_memory, 0).unwrap();
Ok((buffer, buffer_memory))
}
// Pick the first available monitor (no interactive prompt is implemented yet).
fn prompt_for_monitor
(event_loop: &EventLoop<()>)
-> MonitorHandle
{
let monitor = event_loop
.available_monitors()
.nth(0)
.expect("No monitor available");
monitor
}
// Pick the first video mode of the given monitor.
fn prompt_for_video_mode
(monitor: &MonitorHandle)
-> VideoMode
{
let video_mode = monitor
.video_modes()
.nth(0)
.expect("Monitor reports no video modes");
video_mode
}
unsafe fn create_command_pool
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &AppData,
)
-> Result<vk::CommandPool, &'a str> {
let info = vk::CommandPoolCreateInfoBuilder::new()
.flags(vk::CommandPoolCreateFlags::TRANSIENT)
// .flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER)
.queue_family_index(data.queue_family);
Ok(device.create_command_pool(&info, None).unwrap())
}
unsafe fn create_command_pools
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.command_pool = create_command_pool(instance, device, data).unwrap();
let num_images = data.swapchain_images.len();
for _ in 0..num_images {
let command_pool = create_command_pool(instance, device, data).unwrap();
data.command_pools.push(command_pool);
}
Ok(())
}
unsafe fn create_instance
<'a>
(
window: &Window,
entry: &erupt::EntryLoader,
data: &mut AppData
)
-> Result<Arc<erupt::InstanceLoader>, &'a str>
{
let opt = Opt { validation_layers: true };
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = erupt::utils::surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
println!("\nPushing to instance extensions from validations.");
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to instance layers from validation.");
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
Ok(Arc::new(InstanceLoader::new(&entry, &instance_info).unwrap()))
}
unsafe fn create_swapchain_image_views
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.swapchain_image_views = data.swapchain_images
.iter()
.map(|swapchain_image| {
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(*swapchain_image)
.view_type(vk::ImageViewType::_2D)
.format(data.format.format)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(
vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1)
.build(),
);
device.create_image_view(&image_view_info, None).unwrap()
})
.collect();
Ok(())
}
unsafe fn create_swapchain
<'a>
(
window: &Window,
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let _queue = device.get_device_queue(data.queue_family, 0);
let surface_caps = instance.get_physical_device_surface_capabilities_khr(data.physical_device, data.surface).unwrap();
let mut image_count = surface_caps.min_image_count + 1;
if surface_caps.max_image_count > 0 && image_count > surface_caps.max_image_count {
image_count = surface_caps.max_image_count;
}
data.swapchain_image_extent = match surface_caps.current_extent {
vk::Extent2D {
width: u32::MAX,
height: u32::MAX,
} => {
let PhysicalSize { width, height } = window.inner_size();
vk::Extent2D { width, height }
}
normal => normal,
};
let swapchain_info = vk::SwapchainCreateInfoKHRBuilder::new()
.surface(data.surface)
.min_image_count(image_count)
.image_format(data.format.format)
.image_color_space(data.format.color_space)
.image_extent(data.swapchain_image_extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.pre_transform(surface_caps.current_transform)
.composite_alpha(vk::CompositeAlphaFlagBitsKHR::OPAQUE_KHR)
.present_mode(data.present_mode)
.clipped(true)
.old_swapchain(vk::SwapchainKHR::null());
data.swapchain = device.create_swapchain_khr(&swapchain_info, None).unwrap();
data.swapchain_images = device.get_swapchain_images_khr(data.swapchain, None).unwrap().into_vec();
Ok(())
}
unsafe fn create_render_pass
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData
)
-> Result<(), &'a str>
{
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(data.format.format)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let color_attachment_refs = vec![vk::AttachmentReferenceBuilder::new()
.attachment(0)
.layout(vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL)];
let subpasses = vec![vk::SubpassDescriptionBuilder::new()
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.color_attachments(&color_attachment_refs)
.depth_stencil_attachment(&depth_attach_ref)];
let dependencies = vec![vk::SubpassDependencyBuilder::new()
.src_subpass(vk::SUBPASS_EXTERNAL)
.dst_subpass(0)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.src_access_mask(vk::AccessFlags::empty())
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE)];
let render_pass_info = vk::RenderPassCreateInfoBuilder::new()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&dependencies);
data.render_pass = device.create_render_pass(&render_pass_info, None).unwrap();
Ok(())
}
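// Note: the subpass dependency above only synchronizes the color attachment. Since the subpass
// also writes depth, validation layers may additionally want EARLY_FRAGMENT_TESTS and
// DEPTH_STENCIL_ATTACHMENT_WRITE covered; a sketch of the wider masks (not applied here):
// .src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT | vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS)
// .dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT | vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS)
// .dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE)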
unsafe fn create_descriptor_pool
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let ubo_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(data.swapchain_images.len() as u32);
let sampler_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER)
.descriptor_count(data.swapchain_images.len() as u32);
let pool_sizes = &[ubo_size, sampler_size];
let info = vk::DescriptorPoolCreateInfoBuilder::new()
.pool_sizes(pool_sizes)
.max_sets(data.swapchain_images.len() as u32);
data.descriptor_pool = device.create_descriptor_pool(&info, None).unwrap();
Ok(())
}
unsafe fn create_descriptor_set_layout
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let samplers = [vk::Sampler::null()];
let sampler_bindings = [vk::DescriptorSetLayoutBindingBuilder::new()
.binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(1)
.stage_flags(vk::ShaderStageFlags::VERTEX)
.immutable_samplers(&samplers)];
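// Immutable samplers only apply to SAMPLER / COMBINED_IMAGE_SAMPLER descriptors; for the
// UNIFORM_BUFFER binding above the null sampler is just a placeholder and is ignored.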
let descriptor_set_layout_info = vk::DescriptorSetLayoutCreateInfoBuilder::new()
.flags(vk::DescriptorSetLayoutCreateFlags::empty())
.bindings(&sampler_bindings);
data.descriptor_set_layouts.push(device.create_descriptor_set_layout(&descriptor_set_layout_info, None).unwrap());
Ok(())
}
unsafe fn create_pipeline
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let entry_point = CString::new("main").unwrap();
let vert_decoded = utils::decode_spv(SHADER_VERT).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&vert_decoded);
let shader_vert = device.create_shader_module(&module_info, None).unwrap();
let frag_decoded = utils::decode_spv(SHADER_FRAG).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&frag_decoded);
let shader_frag = device.create_shader_module(&module_info, None).unwrap();
let shader_stages = vec![
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::VERTEX)
.module(shader_vert)
.name(&entry_point),
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::FRAGMENT)
.module(shader_frag)
.name(&entry_point),
];
let vertex_buffer_bindings_desc_info = vk::VertexInputBindingDescriptionBuilder::new()
.binding(0)
.stride(std::mem::size_of::<VertexV3>() as u32)
.input_rate(vk::VertexInputRate::VERTEX);
let vert_buff_att_desc_info_pos = vk::VertexInputAttributeDescriptionBuilder::new()
.location(0)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, pos) as u32,);
let vert_buff_att_desc_info_color = vk::VertexInputAttributeDescriptionBuilder::new()
.location(1)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, color) as u32,);
// Keep the binding/attribute arrays alive past this statement; passing temporary slices
// straight into build_dangling() leaves the raw create-info pointing at temporaries that
// are dropped at the end of the statement.
let vertex_binding_descriptions = [vertex_buffer_bindings_desc_info];
let vertex_attribute_descriptions = [vert_buff_att_desc_info_pos, vert_buff_att_desc_info_color];
let vertex_input = vk::PipelineVertexInputStateCreateInfoBuilder::new()
.flags(vk::PipelineVertexInputStateCreateFlags::empty())
.vertex_binding_descriptions(&vertex_binding_descriptions)
.vertex_attribute_descriptions(&vertex_attribute_descriptions)
.build_dangling();
let input_assembly = vk::PipelineInputAssemblyStateCreateInfoBuilder::new()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST)
.primitive_restart_enable(false);
let viewports = vec![vk::ViewportBuilder::new()
.x(0.0)
.y(0.0)
.width(data.swapchain_image_extent.width as f32)
.height(data.swapchain_image_extent.height as f32)
.min_depth(0.0)
.max_depth(1.0)];
let scissors = vec![vk::Rect2DBuilder::new()
.offset(vk::Offset2D { x: 0, y: 0 })
.extent(data.swapchain_image_extent)];
let viewport_state = vk::PipelineViewportStateCreateInfoBuilder::new()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer = vk::PipelineRasterizationStateCreateInfoBuilder::new()
.depth_clamp_enable(true)
.rasterizer_discard_enable(false)
.polygon_mode(vk::PolygonMode::LINE)
.line_width(1.0)
.cull_mode(vk::CullModeFlags::NONE)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE);
let multisampling = vk::PipelineMultisampleStateCreateInfoBuilder::new()
.sample_shading_enable(false)
.rasterization_samples(vk::SampleCountFlagBits::_1);
let color_blend_attachments = vec![vk::PipelineColorBlendAttachmentStateBuilder::new()
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.blend_enable(false)];
let color_blending = vk::PipelineColorBlendStateCreateInfoBuilder::new()
.logic_op_enable(false)
.attachments(&color_blend_attachments);
let pipeline_stencil_info = vk::PipelineDepthStencilStateCreateInfoBuilder::new()
.depth_test_enable(false)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::LESS)
.depth_bounds_test_enable(false)
.min_depth_bounds(0.0)
.max_depth_bounds(1.0)
.front(vk::StencilOpStateBuilder::new().build())
.back(vk::StencilOpStateBuilder::new().build());
let push_size = std::mem::size_of::<Matrix4<f32>>() as u32;
let push_constants_ranges = [vk::PushConstantRangeBuilder::new()
.stage_flags(vk::ShaderStageFlags::VERTEX)
.offset(0)
.size(push_size)];
let pipeline_layout_info = vk::PipelineLayoutCreateInfoBuilder::new()
.push_constant_ranges(&push_constants_ranges)
.set_layouts(&data.descriptor_set_layouts);
data.pipeline_layout = device.create_pipeline_layout(&pipeline_layout_info, None).unwrap();
// let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
// .attachment(1)
// .layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let pipeline_info = vk::GraphicsPipelineCreateInfoBuilder::new()
.stages(&shader_stages)
.vertex_input_state(&vertex_input)
.input_assembly_state(&input_assembly)
.depth_stencil_state(&pipeline_stencil_info)
.viewport_state(&viewport_state)
.rasterization_state(&rasterizer)
.multisample_state(&multisampling)
.color_blend_state(&color_blending)
.layout(data.pipeline_layout)
.render_pass(data.render_pass)
.subpass(0);
data.pipeline = device.create_graphics_pipelines(vk::PipelineCache::null(), &[pipeline_info], None).unwrap()[0];
device.destroy_shader_module(shader_vert, None);
device.destroy_shader_module(shader_frag, None);
Ok(())
}
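// Caution: depth_clamp_enable(true) needs the depthClamp device feature and PolygonMode::LINE
// (wireframe) needs fillModeNonSolid. If the logical device was created with default
// PhysicalDeviceFeatures, validation layers will likely complain about both.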
unsafe fn create_color_objects
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str> {
// We don't have one of these in our pipeline atm.
Ok(())
}
unsafe fn create_depth_objects
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let (depth_image, depth_image_memory) = create_image
(
instance,
device,
data,
data.swapchain_image_extent.width,
data.swapchain_image_extent.height,
1,
vk::SampleCountFlagBits::_1,
// data.msaa_samples,
vk::Format::D32_SFLOAT,
vk::ImageTiling::OPTIMAL,
vk::ImageUsageFlags::DEPTH_STENCIL_ATTACHMENT,
vk::MemoryPropertyFlags::DEVICE_LOCAL,
).unwrap();
data.depth_image = depth_image;
data.depth_image_memory = depth_image_memory;
data.depth_image_view = create_image_view
(
device,
data.depth_image,
vk::Format::D32_SFLOAT,
vk::ImageAspectFlags::DEPTH,
1
).unwrap();
Ok(())
}
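// The depth format is hardcoded to D32_SFLOAT here and in the render pass. A more portable
// sketch would query it instead, e.g.:
// let depth_format = get_depth_format(instance, data)?;
// and then pass `depth_format` to create_image / create_image_view above.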
unsafe fn get_depth_format
<'a>
(
instance: &erupt::InstanceLoader,
data: &AppData,
) -> Result<vk::Format, &'a str> {
let candidates = &[
vk::Format::D32_SFLOAT,
vk::Format::D32_SFLOAT_S8_UINT,
vk::Format::D24_UNORM_S8_UINT,
];
get_supported_format(
instance,
data,
candidates,
vk::ImageTiling::OPTIMAL,
vk::FormatFeatureFlags::DEPTH_STENCIL_ATTACHMENT,
)
}
unsafe fn create_image
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &AppData,
width: u32,
height: u32,
mip_levels: u32,
samples: vk::SampleCountFlagBits,
format: vk::Format,
tiling: vk::ImageTiling,
usage: vk::ImageUsageFlags,
properties: vk::MemoryPropertyFlags,
)
-> Result<(vk::Image, vk::DeviceMemory), &'a str>
{
let info = vk::ImageCreateInfoBuilder::new()
.image_type(vk::ImageType::_2D)
.extent(vk::Extent3D {
width,
height,
depth: 1,
})
.mip_levels(mip_levels)
.array_layers(1)
.format(format)
.tiling(tiling)
.initial_layout(vk::ImageLayout::UNDEFINED)
.usage(usage)
.samples(samples)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let image = device.create_image(&info, None).unwrap();
let requirements = device.get_image_memory_requirements(image);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(requirements.size)
.memory_type_index(get_memory_type_index(instance, data, properties, requirements).unwrap());
let image_memory = device.allocate_memory(&info, None).unwrap();
device.bind_image_memory(image, image_memory, 0).unwrap();
Ok((image, image_memory))
}
unsafe fn create_image_view
<'a>
(
device: &erupt::DeviceLoader,
image: vk::Image,
format: vk::Format,
aspects: vk::ImageAspectFlags,
mip_levels: u32,
)
-> Result<vk::ImageView, &'a str>
{
let subresource_range = vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(aspects)
.base_mip_level(0)
.level_count(mip_levels)
.base_array_layer(0)
.layer_count(1);
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(image)
.view_type(vk::ImageViewType::_2D)
.format(format)
.subresource_range(*subresource_range);
Ok(device.create_image_view(&image_view_info, None).unwrap())
}
unsafe fn begin_single_time_commands
<'a>
(
device: &erupt::DeviceLoader,
data: &AppData,
)
-> Result<vk::CommandBuffer, &'a str>
{
let cb_info = vk::CommandBufferAllocateInfoBuilder::new()
.level(vk::CommandBufferLevel::PRIMARY)
.command_pool(data.command_pool)
.command_buffer_count(1);
let command_buffer = device.allocate_command_buffers(&cb_info).unwrap()[0];
let begin_info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(command_buffer, &begin_info).unwrap();
Ok(command_buffer)
}
unsafe fn end_single_time_commands
<'a>
(
device: &erupt::DeviceLoader,
data: &AppData,
command_buffer: vk::CommandBuffer,
)
-> Result<(), &'a str>
{
device.end_command_buffer(command_buffer).unwrap();
let command_buffers = &[command_buffer];
let info = vk::SubmitInfoBuilder::new()
.command_buffers(command_buffers);
println!("data.graphics_queue {:?}", data.graphics_queue);
device.queue_submit(data.graphics_queue, &[info], vk::Fence::null()).unwrap();
device.queue_wait_idle(data.graphics_queue).unwrap();
Ok(device.free_command_buffers(data.command_pool, &[command_buffer]))
}
unsafe fn create_command_buffers
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.command_buffers = vec![vk::CommandBuffer::null(); data.framebuffers.len()];
Ok(())
}
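// Note: this only reserves null handles, one per framebuffer; the actual command buffers are
// presumably allocated and recorded elsewhere (not shown in this routine).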
unsafe fn create_descriptor_sets
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
// Allocate one descriptor set per swapchain image, all sharing the same layout.
let layouts = vec![data.descriptor_set_layouts[0]; data.swapchain_images.len()];
let info = vk::DescriptorSetAllocateInfoBuilder::new()
.descriptor_pool(data.descriptor_pool)
.set_layouts(&layouts);
data.descriptor_sets = device.allocate_descriptor_sets(&info).unwrap().to_vec();
for i in 0..data.swapchain_images.len() {
let info = vk::DescriptorBufferInfoBuilder::new()
.buffer(data.uniform_buffers[i])
.offset(0)
.range(std::mem::size_of::<UniformBufferObject>() as u64);
let buffer_info = &[info];
// The texture/sampler writes from the example are omitted here; we don't have any textures yet.
let ubo_write = vk::WriteDescriptorSetBuilder::new()
.dst_set(data.descriptor_sets[i])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(buffer_info);
device.update_descriptor_sets(&[ubo_write], &[]);
}
Ok(())
}
unsafe fn get_supported_format
<'a>
(
instance: &erupt::InstanceLoader,
data: &AppData,
candidates: &[vk::Format],
tiling: vk::ImageTiling,
features: vk::FormatFeatureFlags,
)
-> Result<vk::Format, &'a str>
{
candidates
.iter()
.cloned()
.find(|f| {
let properties = instance.get_physical_device_format_properties(data.physical_device, *f);
match tiling {
vk::ImageTiling::LINEAR => properties.linear_tiling_features.contains(features),
vk::ImageTiling::OPTIMAL => properties.optimal_tiling_features.contains(features),
_ => false,
}
})
.ok_or_else(|| "Failed to find a supported format.")
}
unsafe fn create_uniform_buffers
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.uniform_buffers.clear();
data.uniform_buffers_memory.clear();
for _ in 0..data.swapchain_images.len() {
let (uniform_buffer, uniform_buffer_memory) = create_buffer
(
instance,
device,
std::mem::size_of::<UniformBufferObject>() as u64,
vk::BufferUsageFlags::UNIFORM_BUFFER,
2,
).unwrap();
data.uniform_buffers.push(uniform_buffer);
data.uniform_buffers_memory.push(uniform_buffer_memory);
}
Ok(())
}
unsafe fn get_memory_type_index
<'a>
(
instance: &erupt::InstanceLoader,
data: &AppData,
properties: vk::MemoryPropertyFlags,
requirements: vk::MemoryRequirements,
)
-> Result<u32, &'a str>
{
let memory = instance.get_physical_device_memory_properties(data.physical_device);
(0..memory.memory_type_count)
.find(|i| {
let suitable = (requirements.memory_type_bits & (1 << i)) != 0;
let memory_type = memory.memory_types[*i as usize];
suitable && memory_type.property_flags.contains(properties)
})
.ok_or_else(|| "Failed to create suitable memory type")
}
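// Usage sketch: the buffer helpers above currently hardcode memory type indices
// (2 for host-visible staging, 1 for device-local); on a different device those indices may
// not exist or may have different properties. Something along these lines could replace them:
// let reqs = device.get_buffer_memory_requirements(staging_buffer);
// let staging_type = get_memory_type_index(
//     instance,
//     data,
//     vk::MemoryPropertyFlags::HOST_VISIBLE | vk::MemoryPropertyFlags::HOST_COHERENT,
//     reqs,
// ).unwrap();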
unsafe fn create_index_buffer
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let size = (std::mem::size_of::<u32>() * data.indices.len()) as u64;
let (staging_buffer, staging_buffer_memory) = create_buffer
(
instance,
device,
size,
vk::BufferUsageFlags::TRANSFER_SRC,
2,
).unwrap();
let memory = device.map_memory
(
staging_buffer_memory,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty()
).unwrap() as *mut u32;
// memcpy(data.indices.as_ptr(), memory.cast(), data.indices.len());
memory.copy_from_nonoverlapping(data.indices.as_ptr(), data.indices.len());
device.unmap_memory(staging_buffer_memory);
let (index_buffer, index_buffer_memory) = create_buffer
(
instance,
device,
size,
vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::INDEX_BUFFER,
1,
).unwrap();
copy_buffer(device, data, staging_buffer, index_buffer, size).unwrap();
data.index_buffer = index_buffer;
data.index_buffer_memory = index_buffer_memory;
device.destroy_buffer(staging_buffer, None);
device.free_memory(staging_buffer_memory, None);
Ok(())
}
unsafe fn copy_buffer
<'a>
(
device: &erupt::DeviceLoader,
data: &AppData,
source: vk::Buffer,
destination: vk::Buffer,
size: vk::DeviceSize,
)
-> Result<(), &'a str>
{
let cbai = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(data.command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cb = device.allocate_command_buffers(&cbai).unwrap()[0];
let cbbi = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(cb, &cbbi).unwrap();
let bci = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(size);
device.cmd_copy_buffer(cb, source, destination, &[bci]);
device.end_command_buffer(cb).unwrap();
// Submit the copy and wait for it to finish before the caller frees the staging buffer.
let command_buffers = &[cb];
let submit_info = vk::SubmitInfoBuilder::new()
.command_buffers(command_buffers);
device.queue_submit(data.graphics_queue, &[submit_info], vk::Fence::null()).unwrap();
device.queue_wait_idle(data.graphics_queue).unwrap();
device.free_command_buffers(data.command_pool, &[cb]);
Ok(())
}
unsafe fn create_vertex_buffer
<'a>
(
instance: &erupt::InstanceLoader,
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
println!("stnhstnhs");
let vb_size = (::std::mem::size_of::<VertexV3>() * data.vertices.len() * 10) as vk::DeviceSize;
// let vb_size = ((::std::mem::size_of_val(&(3.14 as f32))) * 9 * data.vertices.len()) as vk::DeviceSize;
let (staging_buffer, staging_buffer_memory) = create_buffer
(
instance,
device,
vb_size,
vk::BufferUsageFlags::TRANSFER_SRC,
2,
).unwrap();
let memory = device.map_memory
(
staging_buffer_memory,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty()
).unwrap() as *mut VertexV3;
memory.copy_from_nonoverlapping(data.vertices.as_ptr(), data.vertices.len());
device.unmap_memory(staging_buffer_memory);
let (vertex_buffer, vertex_buffer_memory) = create_buffer
(
instance,
device,
vb_size,
vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER,
1,
).unwrap();
copy_buffer(device, data, staging_buffer, vertex_buffer, vb_size).unwrap();
data.vertex_buffer = vertex_buffer;
data.vertex_buffer_memory = vertex_buffer_memory;
device.destroy_buffer(staging_buffer, None);
device.free_memory(staging_buffer_memory, None);
Ok(())
}
unsafe fn create_framebuffers
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.framebuffers = data
.swapchain_image_views
.iter()
.map(|image_view| {
let attachments = vec![*image_view, data.depth_image_view];
let create_info = vk::FramebufferCreateInfoBuilder::new()
.render_pass(data.render_pass)
.attachments(&attachments)
.width(data.swapchain_image_extent.width)
.height(data.swapchain_image_extent.height)
.layers(1);
device.create_framebuffer(&create_info, None).unwrap()
}).collect();
Ok(())
}
fn load_model
<'a>
(data: &mut AppData)
-> Result<(), &'a str>
{
let model_path: & str = "assets/terrain__002__.obj";
let (models, materials) = tobj::load_obj(&model_path, &tobj::LoadOptions::default()).expect("Failed to load model object!");
let model = models[0].clone();
let materials = materials.unwrap();
let material = materials.clone().into_iter().nth(0).unwrap();
let mut vertices_terr = vec![];
let mesh = model.mesh;
let total_vertices_count = mesh.positions.len() / 3;
for i in 0..total_vertices_count {
let vertex = VertexV3 {
pos: [
mesh.positions[i * 3],
mesh.positions[i * 3 + 1],
mesh.positions[i * 3 + 2],
1.0,
],
color: [0.8, 0.20, 0.30, 0.40],
};
vertices_terr.push(vertex);
};
let mut indices_terr_full = mesh.indices.clone();
let mut indices_terr = vec![];
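// Note: only the first half of the loaded index list is kept below, a crude decimation of the
// mesh; the loaded material is currently unused.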
for i in 0..(indices_terr_full.len() / 2) {
indices_terr.push(indices_terr_full[i]);
}
data.vertices = vertices_terr;
data.indices = indices_terr;
Ok(())
}
unsafe fn create_sync_objects
<'a>
(
device: &erupt::DeviceLoader,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let semaphore_info = vk::SemaphoreCreateInfoBuilder::new();
let fence_info = vk::FenceCreateInfoBuilder::new();
// .flags(vk::FenceCreateFlags::SIGNALED);
for _ in 0..MAX_FRAMES_IN_FLIGHT {
data.image_available_semaphores
.push(device.create_semaphore(&semaphore_info, None).unwrap());
data.render_finished_semaphores
.push(device.create_semaphore(&semaphore_info, None).unwrap());
data.in_flight_fences
.push(device.create_fence(&fence_info, None).unwrap());
}
data.images_in_flight = data.swapchain_images.iter().map(|_| vk::Fence::null()).collect();
Ok(())
}
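// Note: the SIGNALED flag on fence_info is commented out above. If the render loop waits on
// in_flight_fences before the first submit (as the loop in the other routine does), the fences
// most likely need to start signaled or the first wait will block forever.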
// unsafe fn copy_buffer_to_image
// <'a>
// (
// device: &erupt::DeviceLoader,
// data: &AppData,
// buffer: vk::Buffer,
// image: vk::Image,
// width: u32,
// height: u32,
// )
// -> Result<(), &'a str>
// {
// let command_buffer = begin_single_time_commands(device, data).unwrap();
// let subresource = vk::ImageSubresourceLayersBuilder::new()
// .aspect_mask(vk::ImageAspectFlags::COLOR)
// .mip_level(0)
// .base_array_layer(0)
// .layer_count(1)
// .build();
// let region = vk::BufferImageCopyBuilder::new()
// .buffer_offset(0)
// .buffer_row_length(0)
// .buffer_image_height(0)
// .image_subresource(subresource)
// .image_offset(vk::Offset3D { x:0, y: 0, z: 0 })
// .image_extent(vk::Extent3D {
// width,
// height,
// depth: 1,
// });
// device.cmd_copy_buffer_to_image(
// command_buffer,
// buffer,
// image,
// vk::ImageLayout::TRANSFER_DST_OPTIMAL,
// &[region],
// );
// end_single_time_commands(device, data, command_buffer).unwrap();
// Ok(())
// }
// #[repr(C)]
// #[derive(Debug, Clone, Copy)]
// struct FrameData {
// present_semaphore: vk::Semaphore,
// render_semaphore: vk::Semaphore,
// command_pool: vk::CommandPool,
// command_buffer: vk::CommandBuffer,
// }
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct VertexV3 {
pos: [f32; 4],
color: [f32; 4],
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct PushConstants {
// model: Matrix4<f32>,
view: Matrix4<f32>,
// proj: Matrix4<f32>,
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct UniformBufferObject {
model: Matrix4<f32>, // model matrix of the terrain object.
proj: Matrix4<f32>,
}
#[derive(Debug, StructOpt)]
struct Opt {
#[structopt(short, long)]
validation_layers: bool,
}
// let attachments = vec![
// vk::AttachmentDescriptionBuilder::new()
// .format(data.format.format)
// .samples(vk::SampleCountFlagBits::_1)
// .load_op(vk::AttachmentLoadOp::CLEAR)
// .store_op(vk::AttachmentStoreOp::STORE)
// .stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
// .stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
// .initial_layout(vk::ImageLayout::UNDEFINED)
// .final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
// vk::AttachmentDescriptionBuilder::new()
// .format(vk::Format::D32_SFLOAT)
// .samples(vk::SampleCountFlagBits::_1)
// .load_op(vk::AttachmentLoadOp::CLEAR)
// .store_op(vk::AttachmentStoreOp::DONT_CARE)
// .stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
// .stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
// .initial_layout(vk::ImageLayout::UNDEFINED)
// .final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
// ];
// let staging_buffer_info = vk::BufferCreateInfoBuilder::new()
// .size(vb_size)
// .usage(vk::BufferUsageFlags::TRANSFER_SRC)
// .sharing_mode(vk::SharingMode::EXCLUSIVE);
// let staging_buffer = device.create_buffer(&staging_buffer_info, None).unwrap();
// let requirements = device.get_buffer_memory_requirements(staging_buffer);
// let staging_memory_info = vk::MemoryAllocateInfoBuilder::new()
// .allocation_size(requirements.size)
// .memory_type_index(2);
// let staging_buffer_memory = device.allocate_memory(&staging_memory_info, None).unwrap();
// device.bind_buffer_memory(staging_buffer, staging_buffer_memory, 0).unwrap();
// let vertex_buffer_info = vk::BufferCreateInfoBuilder::new()
// .size(vb_size)
// .usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER)
// .sharing_mode(vk::SharingMode::EXCLUSIVE);
// let vertex_buffer = device.create_buffer(&vertex_buffer_info, None).unwrap();
// let vbm_reqs = device.get_buffer_memory_requirements(vertex_buffer);
// let vb_memory_info = vk::MemoryAllocateInfoBuilder::new()
// .allocation_size(vbm_reqs.size)
// .memory_type_index(1);
// let vertex_buffer_memory = device.allocate_memory(&vb_memory_info, None).unwrap();
// device.bind_buffer_memory(vertex_buffer, vertex_buffer_memory, 0).unwrap();
// let cba_info = vk::CommandBufferAllocateInfoBuilder::new()
// .command_pool(data.command_pool)
// .level(vk::CommandBufferLevel::PRIMARY)
// .command_buffer_count(1);
// let cb = device.allocate_command_buffers(&cba_info).unwrap()[0];
// let cb_begin_info = vk::CommandBufferBeginInfoBuilder::new()
// .flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
// device.begin_command_buffer(cb, &cb_begin_info).unwrap();
// let buffer_copy_info = vk::BufferCopyBuilder::new()
// .src_offset(0)
// .dst_offset(0)
// .size(vb_size);
// device.cmd_copy_buffer(cb, staging_buffer, vertex_buffer, &[buffer_copy_info]);
// device.end_command_buffer(cb).unwrap();
// Vulkan Scratch Routine 2400 ----------------- -- -- -- ---- -
use erupt::{
cstr,
utils::{self, surface},
vk, DeviceLoader, EntryLoader, InstanceLoader,
vk::{Device, MemoryMapFlags},
};
use cgmath::{Deg, Rad, Matrix4, Point3, Vector3, Vector4};
use std::{
ffi::{c_void, CStr, CString},
fs,
fs::{write, OpenOptions},
io::prelude::*,
mem::*,
os::raw::c_char,
ptr,
result::Result,
result::Result::*,
string::String,
sync::Arc,
thread,
time,
};
use smallvec::SmallVec;
use raw_window_handle::{HasRawWindowHandle, RawWindowHandle};
use memoffset::offset_of;
use simple_logger::SimpleLogger;
use winit::{
dpi::PhysicalSize,
event::{
Event, KeyboardInput, WindowEvent,
ElementState, StartCause, VirtualKeyCode,
DeviceEvent,
},
event_loop::{ControlFlow, EventLoop},
window::WindowBuilder,
window::Window
};
use structopt::StructOpt;
const TITLE: &str = "vulkan-routine-2400";
const FRAMES_IN_FLIGHT: usize = 2;
const LAYER_KHRONOS_VALIDATION: *const c_char = cstr!("VK_LAYER_KHRONOS_validation");
const SHADER_VERT: &[u8] = include_bytes!("../spv/s_300__.vert.spv");
const SHADER_FRAG: &[u8] = include_bytes!("../spv/s1.frag.spv");
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct FrameData {
present_semaphore: vk::Semaphore,
render_semaphore: vk::Semaphore,
command_pool: vk::CommandPool,
command_buffer: vk::CommandBuffer,
}
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct VertexV3 {
pos: [f32; 4],
color: [f32; 4],
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct UniformBufferObject {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[derive(Debug, StructOpt)]
struct Opt {
#[structopt(short, long)]
validation_layers: bool,
}
// static mut log_300: Vec<String> = vec!();
unsafe extern "system" fn debug_callback(
_message_severity: vk::DebugUtilsMessageSeverityFlagBitsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
let str_99 = String::from(CStr::from_ptr((*p_callback_data).p_message).to_string_lossy());
// log_300.push(str_99 );
eprintln!(
"{}",
CStr::from_ptr((*p_callback_data).p_message).to_string_lossy()
);
vk::FALSE
}
pub unsafe fn vulkan_routine_6300
()
{
println!("\n6300\n");
routine_pure_procedural();
}
// create a monolith sort of like we start with app.
// we are basically going to reproduce backup, but change names of stuff, and organize it better,
// in order to be able to turn it into an object.
unsafe fn routine_pure_procedural
()
{
let opt = Opt { validation_layers: true };
let event_loop = EventLoop::new();
let window = WindowBuilder::new()
.with_title(TITLE)
.with_resizable(false)
.build(&event_loop)
.unwrap();
let entry = Arc::new(EntryLoader::new().unwrap());
let application_name = CString::new("Vulkan-Routine-6300").unwrap();
let engine_name = CString::new("Peregrine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
let instance = Arc::new(InstanceLoader::new(&entry, &instance_info).unwrap());
let messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
instance.create_debug_utils_messenger_ext(&messenger_info, None).expect("problem creating debug util.")
} else {
Default::default()
};
let surface = surface::create_surface(&instance, &window, None).unwrap();
let (physical_device, queue_family, format, present_mode, device_properties) = create_precursors
(
&instance,
surface.clone(),
).unwrap();
println!("\n\n\nUsing physical device: {:?}\n\n\n", CStr::from_ptr(device_properties.device_name.as_ptr()));
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new();
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device = Arc::new(DeviceLoader::new(&instance, physical_device, &device_info).unwrap());
let queue = device.get_device_queue(queue_family, 0);
let surface_caps = instance.get_physical_device_surface_capabilities_khr(physical_device, surface).unwrap();
let mut image_count = surface_caps.min_image_count + 1;
if surface_caps.max_image_count > 0 && image_count > surface_caps.max_image_count {
image_count = surface_caps.max_image_count;
}
let swapchain_image_extent = match surface_caps.current_extent {
vk::Extent2D {
width: u32::MAX,
height: u32::MAX,
} => {
let PhysicalSize { width, height } = window.inner_size();
vk::Extent2D { width, height }
}
normal => normal,
};
let swapchain_info = vk::SwapchainCreateInfoKHRBuilder::new()
.surface(surface)
.min_image_count(image_count)
.image_format(format.format)
.image_color_space(format.color_space)
.image_extent(swapchain_image_extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.pre_transform(surface_caps.current_transform)
.composite_alpha(vk::CompositeAlphaFlagBitsKHR::OPAQUE_KHR)
.present_mode(present_mode)
.clipped(true)
.old_swapchain(vk::SwapchainKHR::null());
let swapchain = device.create_swapchain_khr(&swapchain_info, None).unwrap();
let swapchain_images = device.get_swapchain_images_khr(swapchain, None).unwrap();
let swapchain_image_views: Vec<_> = swapchain_images
.iter()
.map(|swapchain_image| {
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(*swapchain_image)
.view_type(vk::ImageViewType::_2D)
.format(format.format)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(
vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1)
.build(),
);
device.create_image_view(&image_view_info, None).unwrap()
})
.collect();
let command_pool_info = vk::CommandPoolCreateInfoBuilder::new()
.queue_family_index(queue_family)
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER);
let command_pool = device.create_command_pool(&command_pool_info, None).unwrap();
let (mut vertices_terr, mut indices_terr) = load_model().unwrap();
let physical_device_memory_properties = instance.get_physical_device_memory_properties(physical_device);
let vb_size = (::std::mem::size_of::<VertexV3>() * vertices_terr.len()) as vk::DeviceSize;
let ib_size = (::std::mem::size_of::<u32>() * indices_terr.len()) as vk::DeviceSize;
let ib = buffer_indices
(
&device,
queue,
command_pool,
&mut vertices_terr,
&mut indices_terr,
).unwrap();
let info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let sb = device.create_buffer(&info, None).expect("Buffer create fail.");
let mem_reqs = device.get_buffer_memory_requirements(sb);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(2);
let sb_mem = device.allocate_memory(&info, None).unwrap();
device.bind_buffer_memory(sb, sb_mem, 0).expect("Bind memory fail.");
let data_ptr = device.map_memory(
sb_mem,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut VertexV3;
data_ptr.copy_from_nonoverlapping(vertices_terr.as_ptr(), vertices_terr.len());
device.unmap_memory(sb_mem);
let info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let vb = device.create_buffer(&info, None).expect("Create buffer fail.");
let mem_reqs = device.get_buffer_memory_requirements(vb);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(1);
let vb_mem = device.allocate_memory(&info, None).unwrap();
device.bind_buffer_memory(vb, vb_mem, 0).expect("Bind memory fail.");
let info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cb = device.allocate_command_buffers(&info).unwrap()[0];
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(cb, &info).expect("Begin command buffer fail.");
let info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(vb_size);
device.cmd_copy_buffer(cb, sb, vb, &[info]);
device.end_command_buffer(cb).expect("End command buffer fail.");
let slice = &[cb];
let info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(slice)
.signal_semaphores(&[]);
device.queue_submit(queue, &[info], vk::Fence::null()).expect("Queue submit fail.");
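// The vertex-copy submit above uses no fence; nothing here guarantees the copy has finished
// before rendering starts. A simple (if blunt) way to guarantee completion would be:
// device.queue_wait_idle(queue).expect("Queue wait idle fail.");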
let info = vk::DescriptorSetLayoutBindingFlagsCreateInfoBuilder::new()
.binding_flags(&[vk::DescriptorBindingFlags::empty()]);
let samplers = [vk::Sampler::null()];
let binding = vk::DescriptorSetLayoutBindingBuilder::new()
.binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(1)
.stage_flags(vk::ShaderStageFlags::VERTEX)
.immutable_samplers(&samplers);
let slice = &[binding];
let info = vk::DescriptorSetLayoutCreateInfoBuilder::new()
.flags(vk::DescriptorSetLayoutCreateFlags::empty()) // ? https://docs.rs/erupt/0.22.0+204/erupt/vk1_0/struct.DescriptorSetLayoutCreateFlags.html
.bindings(slice);
let descriptor_set_layout = device.create_descriptor_set_layout(&info, None).unwrap();
let ubo_size = ::std::mem::size_of::<UniformBufferObject>();
let mut uniform_buffers: Vec<vk::Buffer> = vec![];
let mut uniform_buffers_memories: Vec<vk::DeviceMemory> = vec![];
let swapchain_image_count = swapchain_images.len();
for _ in 0..swapchain_image_count {
let (uniform_buffer, uniform_buffer_memory) = create_buffer(
&device,
ubo_size as u64,
vk::BufferUsageFlags::UNIFORM_BUFFER,
2,
);
uniform_buffers.push(uniform_buffer);
uniform_buffers_memories.push(uniform_buffer_memory);
}
let mut uniform_transform = UniformBufferObject {
model:
// Matrix4::from_translation(Vector3::new(5.0, 5.0, 5.0))
Matrix4::from_angle_y(Deg(10.0))
* Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5),
view: Matrix4::look_at_rh(
Point3::new(0.80, 0.80, 0.80),
Point3::new(0.0, 0.0, 0.0),
Vector3::new(0.0, 0.0, 1.0),
),
proj: {
let mut proj = cgmath::perspective(
Deg(30.0),
swapchain_image_extent.width as f32
/ swapchain_image_extent.height as f32,
0.1,
10.0,
);
proj[1][1] = proj[1][1] * -1.0;
proj
},
};
let pool_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(swapchain_image_count as u32);
let pool_sizes = &[pool_size];
// One descriptor set per swapchain image, all sharing the same layout.
let set_layouts = vec![descriptor_set_layout; swapchain_image_count];
let pool_info = vk::DescriptorPoolCreateInfoBuilder::new()
.pool_sizes(pool_sizes)
.max_sets(swapchain_image_count as u32);
let desc_pool = device.create_descriptor_pool(&pool_info, None).unwrap();
let d_set_alloc_info = vk::DescriptorSetAllocateInfoBuilder::new()
.descriptor_pool(desc_pool)
.set_layouts(&set_layouts);
let d_sets = device.allocate_descriptor_sets(&d_set_alloc_info).expect("failed in alloc DescriptorSet");
let ubo_size = ::std::mem::size_of::<UniformBufferObject>() as u64;
for i in 0..swapchain_image_count {
let d_buf_info = vk::DescriptorBufferInfoBuilder::new()
.buffer(uniform_buffers[i])
.offset(0)
.range(ubo_size);
let d_buf_infos = [d_buf_info];
let d_write_builder = vk::WriteDescriptorSetBuilder::new()
.dst_set(d_sets[i])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(&d_buf_infos);
let d_write_sets = [d_write_builder];
device.update_descriptor_sets(&d_write_sets, &[]);
update_uniform_buffer
(
&device,
&mut uniform_transform,
&mut uniform_buffers_memories,
&mut uniform_buffers,
i as usize,
2.3
);
}
let depth_image_info = vk::ImageCreateInfoBuilder::new()
.flags(vk::ImageCreateFlags::empty())
.image_type(vk::ImageType::_2D)
.format(vk::Format::D32_SFLOAT)
.extent(vk::Extent3D {
width: swapchain_image_extent.width,
height: swapchain_image_extent.height,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlagBits::_1)
.tiling(vk::ImageTiling::OPTIMAL)
.usage(vk::ImageUsageFlags::DEPTH_STENCIL_ATTACHMENT)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0])
.initial_layout(vk::ImageLayout::UNDEFINED);
let depth_image = device.create_image(&depth_image_info, None)
.expect("Failed to create depth (texture) Image.");
let dpth_img_mem_reqs = device.get_image_memory_requirements(depth_image);
let dpth_img_mem_info = vk::MemoryAllocateInfoBuilder::new()
.memory_type_index(1)
.allocation_size(dpth_img_mem_reqs.size);
let depth_image_memory = device.allocate_memory(&dpth_img_mem_info, None)
.expect("Failed to alloc mem for depth image.");
device.bind_image_memory(depth_image, depth_image_memory, 0)
.expect("Failed to bind depth image memory.");
let depth_image_view_info = vk::ImageViewCreateInfoBuilder::new()
.flags(vk::ImageViewCreateFlags::empty())
.image(depth_image)
.view_type(vk::ImageViewType::_2D)
.format(vk::Format::D32_SFLOAT)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::DEPTH,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
});
let depth_image_view = device.create_image_view(&depth_image_view_info, None)
.expect("Failed to create image view.");
let entry_point = CString::new("main").unwrap();
let vert_decoded = utils::decode_spv(SHADER_VERT).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&vert_decoded);
let shader_vert = device.create_shader_module(&module_info, None).unwrap();
let frag_decoded = utils::decode_spv(SHADER_FRAG).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&frag_decoded);
let shader_frag = device.create_shader_module(&module_info, None).unwrap();
let shader_stages = vec![
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::VERTEX)
.module(shader_vert)
.name(&entry_point),
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::FRAGMENT)
.module(shader_frag)
.name(&entry_point),
];
let vertex_buffer_bindings_desc_info = vk::VertexInputBindingDescriptionBuilder::new()
.binding(0)
.stride(std::mem::size_of::<VertexV3>() as u32)
.input_rate(vk::VertexInputRate::VERTEX);
let vert_buff_att_desc_info_pos = vk::VertexInputAttributeDescriptionBuilder::new()
.location(0)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, pos) as u32,);
let vert_buff_att_desc_info_color = vk::VertexInputAttributeDescriptionBuilder::new()
.location(1)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, color) as u32,);
// Keep the binding/attribute arrays alive past this statement; passing temporary slices
// straight into build_dangling() leaves the raw create-info pointing at temporaries that
// are dropped at the end of the statement.
let vertex_binding_descriptions = [vertex_buffer_bindings_desc_info];
let vertex_attribute_descriptions = [vert_buff_att_desc_info_pos, vert_buff_att_desc_info_color];
let vertex_input = vk::PipelineVertexInputStateCreateInfoBuilder::new()
.flags(vk::PipelineVertexInputStateCreateFlags::empty())
.vertex_binding_descriptions(&vertex_binding_descriptions)
.vertex_attribute_descriptions(&vertex_attribute_descriptions)
.build_dangling();
let input_assembly = vk::PipelineInputAssemblyStateCreateInfoBuilder::new()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST)
.primitive_restart_enable(false);
let viewports = vec![vk::ViewportBuilder::new()
.x(0.0)
.y(0.0)
.width(swapchain_image_extent.width as f32)
.height(swapchain_image_extent.height as f32)
.min_depth(0.0)
.max_depth(1.0)];
let scissors = vec![vk::Rect2DBuilder::new()
.offset(vk::Offset2D { x: 0, y: 0 })
.extent(swapchain_image_extent)];
let viewport_state = vk::PipelineViewportStateCreateInfoBuilder::new()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer = vk::PipelineRasterizationStateCreateInfoBuilder::new()
.depth_clamp_enable(true)
.rasterizer_discard_enable(false)
.polygon_mode(vk::PolygonMode::LINE)
.line_width(1.0)
.cull_mode(vk::CullModeFlags::NONE)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE);
let multisampling = vk::PipelineMultisampleStateCreateInfoBuilder::new()
.sample_shading_enable(false)
.rasterization_samples(vk::SampleCountFlagBits::_1);
let color_blend_attachments = vec![vk::PipelineColorBlendAttachmentStateBuilder::new()
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.blend_enable(false)];
let color_blending = vk::PipelineColorBlendStateCreateInfoBuilder::new()
.logic_op_enable(false)
.attachments(&color_blend_attachments);
let pipeline_stencil_info = vk::PipelineDepthStencilStateCreateInfoBuilder::new()
.depth_test_enable(false)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::LESS)
.depth_bounds_test_enable(false)
.min_depth_bounds(0.0)
.max_depth_bounds(1.0)
.front(vk::StencilOpStateBuilder::new().build())
.back(vk::StencilOpStateBuilder::new().build());
let desc_layouts_slc = &[descriptor_set_layout];
let pipeline_layout_info = vk::PipelineLayoutCreateInfoBuilder::new()
.set_layouts(desc_layouts_slc);
let pipeline_layout = device.create_pipeline_layout(&pipeline_layout_info, None).unwrap();
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(format.format)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let color_attachment_refs = vec![vk::AttachmentReferenceBuilder::new()
.attachment(0)
.layout(vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL)];
let subpasses = vec![vk::SubpassDescriptionBuilder::new()
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.color_attachments(&color_attachment_refs)
.depth_stencil_attachment(&depth_attach_ref)];
let dependencies = vec![vk::SubpassDependencyBuilder::new()
.src_subpass(vk::SUBPASS_EXTERNAL)
.dst_subpass(0)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.src_access_mask(vk::AccessFlags::empty())
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE)];
let render_pass_info = vk::RenderPassCreateInfoBuilder::new()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&dependencies);
let render_pass = device.create_render_pass(&render_pass_info, None).unwrap();
let pipeline_info = vk::GraphicsPipelineCreateInfoBuilder::new()
.stages(&shader_stages)
.vertex_input_state(&vertex_input)
.input_assembly_state(&input_assembly)
.depth_stencil_state(&pipeline_stencil_info)
.viewport_state(&viewport_state)
.rasterization_state(&rasterizer)
.multisample_state(&multisampling)
.color_blend_state(&color_blending)
.layout(pipeline_layout)
.render_pass(render_pass)
.subpass(0);
let pipeline = device.create_graphics_pipelines(vk::PipelineCache::null(), &[pipeline_info], None).unwrap()[0];
let swapchain_framebuffers: Vec<_> = swapchain_image_views
.iter()
.map(|image_view| {
let attachments = vec![*image_view, depth_image_view];
let framebuffer_info = vk::FramebufferCreateInfoBuilder::new()
.render_pass(render_pass)
.attachments(&attachments)
.width(swapchain_image_extent.width)
.height(swapchain_image_extent.height)
.layers(1);
device.create_framebuffer(&framebuffer_info, None).unwrap()
})
.collect();
let cmd_buf_allocate_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(swapchain_framebuffers.len() as _);
let cmd_bufs = device.allocate_command_buffers(&cmd_buf_allocate_info).unwrap();
for (i, (&cmd_buf, &framebuffer)) in cmd_bufs.iter().zip(swapchain_framebuffers.iter()).enumerate() {
let cmd_buf_begin_info = vk::CommandBufferBeginInfoBuilder::new();
device.begin_command_buffer(cmd_buf, &cmd_buf_begin_info).unwrap();
let clear_values = vec![
vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.0, 1.0],
},
},
vk::ClearValue {
depth_stencil: vk::ClearDepthStencilValue {
depth: 1.0,
stencil: 0,
},
},
];
let render_pass_begin_info = vk::RenderPassBeginInfoBuilder::new()
.render_pass(render_pass)
.framebuffer(framebuffer)
.render_area(vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: swapchain_image_extent,
})
.clear_values(&clear_values);
device.cmd_begin_render_pass(
cmd_buf,
&render_pass_begin_info,
vk::SubpassContents::INLINE,
);
device.cmd_bind_pipeline(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline);
device.cmd_bind_index_buffer(cmd_buf, ib, 0, vk::IndexType::UINT32);
device.cmd_bind_vertex_buffers(cmd_buf, 0, &[vb], &[0]);
device.cmd_bind_descriptor_sets(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline_layout, 0, &[d_sets[i]], &[]);
// Draw a single instance of the indexed mesh.
device.cmd_draw_indexed(cmd_buf, indices_terr.len() as u32, 1, 0, 0, 0);
device.cmd_end_render_pass(cmd_buf);
device.end_command_buffer(cmd_buf).unwrap();
}
let semaphore_info = vk::SemaphoreCreateInfoBuilder::new();
let image_available_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| device.create_semaphore(&semaphore_info, None).unwrap())
.collect();
let render_finished_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| device.create_semaphore(&semaphore_info, None).unwrap())
.collect();
let fence_info = vk::FenceCreateInfoBuilder::new().flags(vk::FenceCreateFlags::SIGNALED);
let in_flight_fences: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| device.create_fence(&fence_info, None).unwrap())
.collect();
let mut images_in_flight: Vec<_> = swapchain_images.iter().map(|_| vk::Fence::null()).collect();
let mut frame = 0;
#[allow(clippy::collapsible_match, clippy::single_match)]
event_loop.run(move |event, _, control_flow| match event {
Event::NewEvents(StartCause::Init) => {
*control_flow = ControlFlow::Poll;
}
Event::WindowEvent { event, .. } => match event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
_ => (),
},
Event::DeviceEvent { event, .. } => match event {
DeviceEvent::Key(KeyboardInput {
virtual_keycode: Some(keycode),
state,
..
}) => match (keycode, state) {
(VirtualKeyCode::Escape, ElementState::Released) => {
*control_flow = ControlFlow::Exit
}
_ => (),
},
_ => (),
},
Event::MainEventsCleared => {
device.wait_for_fences(&[in_flight_fences[frame]], true, u64::MAX).unwrap();
let image_index = device.acquire_next_image_khr
(
swapchain,
u64::MAX,
image_available_semaphores[frame],
vk::Fence::null(),
).unwrap();
update_uniform_buffer(&device, &mut uniform_transform, &mut uniform_buffers_memories, &mut uniform_buffers, image_index as usize, 3.2);
let image_in_flight = images_in_flight[image_index as usize];
if !image_in_flight.is_null() {
device.wait_for_fences(&[image_in_flight], true, u64::MAX).unwrap();
}
images_in_flight[image_index as usize] = in_flight_fences[frame];
let wait_semaphores = vec![image_available_semaphores[frame]];
let command_buffers = vec![cmd_bufs[image_index as usize]];
let signal_semaphores = vec![render_finished_semaphores[frame]];
let submit_info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&wait_semaphores)
.wait_dst_stage_mask(&[vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT])
.command_buffers(&command_buffers)
.signal_semaphores(&signal_semaphores);
let in_flight_fence = in_flight_fences[frame];
device.reset_fences(&[in_flight_fence]).unwrap();
device
.queue_submit(queue, &[submit_info], in_flight_fence)
.unwrap();
let swapchains = vec![swapchain];
let image_indices = vec![image_index];
let present_info = vk::PresentInfoKHRBuilder::new()
.wait_semaphores(&signal_semaphores)
.swapchains(&swapchains)
.image_indices(&image_indices);
device.queue_present_khr(queue, &present_info).unwrap();
frame = (frame + 1) % FRAMES_IN_FLIGHT;
}
// Event::LoopDestroyed => unsafe {
// device.device_wait_idle().unwrap();
// for &semaphore in image_available_semaphores
// .iter()
// .chain(render_finished_semaphores.iter())
// {
// device.destroy_semaphore(semaphore, None);
// }
// for &fence in &in_flight_fences {
// device.destroy_fence(fence, None);
// }
// device.destroy_command_pool(command_pool, None);
// for &framebuffer in &swapchain_framebuffers {
// device.destroy_framebuffer(framebuffer, None);
// }
// device.destroy_pipeline(pipeline, None);
// device.destroy_render_pass(render_pass, None);
// device.destroy_pipeline_layout(pipeline_layout, None);
// device.destroy_shader_module(shader_vert, None);
// device.destroy_shader_module(shader_frag, None);
// for &image_view in &swapchain_image_views {
// device.destroy_image_view(image_view, None);
// }
// device.destroy_swapchain_khr(swapchain, None);
// device.destroy_device(None);
// instance.destroy_surface_khr(surface, None);
// if !messenger.is_null() {
// instance.destroy_debug_utils_messenger_ext(messenger, None);
// }
// instance.destroy_instance(None);
// println!("Exited cleanly");
// },
_ => (),
})
}
unsafe fn update_uniform_buffer(
device: &DeviceLoader,
uniform_transform: &mut UniformBufferObject,
ubo_mems: &mut Vec<vk::DeviceMemory>,
ubos: &mut Vec<vk::Buffer>,
current_image: usize,
delta_time: f32
)
{
uniform_transform.model =
Matrix4::from_axis_angle(Vector3::new(1.0, 0.0, 0.0), Deg(0.110) * delta_time)
* uniform_transform.model;
let uni_transform_slice = [uniform_transform.clone()];
let buffer_size = (std::mem::size_of::<UniformBufferObject>() * uni_transform_slice.len()) as u64;
let data_ptr =
device.map_memory(
ubo_mems[current_image],
0,
buffer_size,
vk::MemoryMapFlags::empty(),
).expect("Failed to map memory.") as *mut UniformBufferObject;
data_ptr.copy_from_nonoverlapping(uni_transform_slice.as_ptr(), uni_transform_slice.len());
device.unmap_memory(ubo_mems[current_image]);
}
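// Note: `ubos` is currently unused here, and the buffer memory is mapped and unmapped every
// frame. Keeping the uniform buffers persistently mapped (map once at creation, write through
// the pointer each frame) would be a common alternative; that is a design choice, not something
// this routine requires.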
unsafe fn create_buffer
(
device: &DeviceLoader,
// flags: vk::BufferCreateFlags,
size: vk::DeviceSize,
usage: vk::BufferUsageFlags,
memory_type_index: u32,
// queue_family_indices: &[u32],
)
-> (vk::Buffer, vk::DeviceMemory) {
let buffer_create_info = vk::BufferCreateInfoBuilder::new()
// .flags(&[])
.size(size)
.usage(usage)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0]);
let buffer = device.create_buffer(&buffer_create_info, None)
.expect("Failed to create buffer.");
let mem_reqs = device.get_buffer_memory_requirements(buffer);
let allocate_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(memory_type_index);
let buffer_memory = device
.allocate_memory(&allocate_info, None)
.expect("Failed to allocate memory for buffer.");
device.bind_buffer_memory(buffer, buffer_memory, 0)
.expect("Failed to bind buffer.");
(buffer, buffer_memory)
}
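// The raw memory_type_index parameter above (and the hard-coded 1 / 2 used further down
// in this file) can be derived instead of guessed. This is a sketch, not part of the
// original routine; the helper name is made up, and it assumes the caller has already
// queried the physical-device memory properties and the buffer's memory requirements.
fn find_memory_type_index(
    memory_properties: &vk::PhysicalDeviceMemoryProperties,
    memory_requirements: &vk::MemoryRequirements,
    required_flags: vk::MemoryPropertyFlags,
) -> Option<u32> {
    (0..memory_properties.memory_type_count).find(|&i| {
        // The buffer must allow this memory type, and the type must have the requested properties.
        let type_allowed = (memory_requirements.memory_type_bits & (1u32 << i)) != 0;
        let flags_present = memory_properties.memory_types[i as usize]
            .property_flags
            .contains(required_flags);
        type_allowed && flags_present
    })
}
// Example (hypothetical call): a host-visible staging buffer would ask for
// HOST_VISIBLE | HOST_COHERENT, a device-local vertex/index buffer for DEVICE_LOCAL.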
fn load_model
<'a>
()
-> Result<(Vec<VertexV3>, Vec<u32>), &'a str>
{
let model_path: &str = "assets/terrain__002__.obj";
let (models, materials) = tobj::load_obj(&model_path, &tobj::LoadOptions::default()).expect("Failed to load model object!");
let model = models[0].clone();
let materials = materials.unwrap();
let material = materials.clone().into_iter().nth(0).unwrap();
let mut vertices_terr = vec![];
let mesh = model.mesh;
let total_vertices_count = mesh.positions.len() / 3;
for i in 0..total_vertices_count {
let vertex = VertexV3 {
pos: [
mesh.positions[i * 3],
mesh.positions[i * 3 + 1],
mesh.positions[i * 3 + 2],
1.0,
],
color: [0.8, 0.20, 0.30, 0.40],
};
vertices_terr.push(vertex);
};
let indices_terr_full = mesh.indices.clone();
// Keep only the first half of the index list (renders only part of the mesh).
let mut indices_terr = vec![];
for i in 0..(indices_terr_full.len() / 2) {
indices_terr.push(indices_terr_full[i]);
}
Ok((vertices_terr, indices_terr))
}
unsafe fn create_precursors
<'a>
(
instance: &InstanceLoader,
surface: vk::SurfaceKHR,
)
-> Result<(vk::PhysicalDevice, u32, vk::SurfaceFormatKHR, vk::PresentModeKHR, vk::PhysicalDeviceProperties), &'a str>
{
let device_extensions = vec![ // TODO: give this an explicit element type so it can be passed between functions.
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let (physical_device, queue_family, format, present_mode, device_properties) =
instance.enumerate_physical_devices(None)
.unwrap()
.into_iter()
.filter_map(|physical_device| {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
// (
// .contains(vk::QueueFlags::GRAPHICS)
// && .contains(vk::QueueFlags::TRANSFER)
// )
// .contains(vk::QueueFlags::TRANSFER)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
Ok((physical_device, queue_family, format, present_mode, device_properties))
}
unsafe fn buffer_indices
<'a>
(
device: &DeviceLoader,
queue: vk::Queue,
command_pool: vk::CommandPool,
vertices: &mut Vec<VertexV3>,
indices: &mut Vec<u32>,
)
-> Result<vk::Buffer, &'a str>
{
// let vb_size = ((::std::mem::size_of_val(&(3.14 as f32))) * 9 * vertices_terr.len()) as vk::DeviceSize;
let ib_size = (std::mem::size_of::<u32>() * indices.len()) as vk::DeviceSize;
let info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let sb = device.create_buffer(&info, None).expect("Failed to create a staging buffer.");
let mem_reqs = device.get_buffer_memory_requirements(sb);
let info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(2);
let sb_mem = device.allocate_memory(&info, None).unwrap();
device.bind_buffer_memory(sb, sb_mem, 0).unwrap();
let data_ptr = device.map_memory(
sb_mem,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut u32;
data_ptr.copy_from_nonoverlapping(indices.as_ptr(), indices.len());
device.unmap_memory(sb_mem);
// TODO: after the copy completes, destroy the staging buffer and free its memory (see the destroy_staging sketch after this function).
let info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::INDEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let ib = device.create_buffer(&info, None)
.expect("Failed to create index buffer.");
let mem_reqs = device.get_buffer_memory_requirements(ib);
let alloc_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(1);
let ib_mem = device.allocate_memory(&alloc_info, None).unwrap();
device.bind_buffer_memory(ib, ib_mem, 0).expect("Failed to bind index buffer memory.");
let info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cb = device.allocate_command_buffers(&info).unwrap()[0];
let info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(cb, &info).expect("Failed begin_command_buffer.");
let info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(ib_size);
device.cmd_copy_buffer(cb, sb, ib, &[info]);
let slice = &[cb];
device.end_command_buffer(cb).expect("Failed to end command buffer.");
let info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(slice)
.signal_semaphores(&[]);
device.queue_submit(queue, &[info], vk::Fence::null()).expect("Failed to queue submit.");
Ok(ib)
}
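// The TODO above: the copy submitted by buffer_indices is never waited on and its staging
// resources are never released. A sketch of the missing teardown (hypothetical helper,
// assuming the staging buffer, its memory, and the one-shot command buffer are kept around
// by the caller; they are not currently returned):
unsafe fn destroy_staging(
    device: &DeviceLoader,
    queue: vk::Queue,
    command_pool: vk::CommandPool,
    cb: vk::CommandBuffer,
    staging_buffer: vk::Buffer,
    staging_memory: vk::DeviceMemory,
) {
    // Crude but sufficient here: block until the transfer queue drains,
    // then release the one-shot command buffer and the staging allocation.
    device.queue_wait_idle(queue).expect("Failed to wait for queue idle.");
    device.free_command_buffers(command_pool, &[cb]);
    device.destroy_buffer(staging_buffer, None);
    device.free_memory(staging_memory, None);
}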
// Vulkan Scratch Routine 4500 ----------------- -- -- -- ---- -
use erupt::{
cstr,
utils::{self, surface},
vk, DeviceLoader, EntryLoader, InstanceLoader,
vk::{Device, MemoryMapFlags},
};
use cgmath::{Deg, Rad, Matrix4, Point3, Vector3, Vector4};
use std::{
ffi::{c_void, CStr, CString},
fs,
fs::{write, OpenOptions},
io::{stdin, stdout, Write},
io::prelude::*,
mem::*,
os::raw::c_char,
ptr,
result::Result,
result::Result::*,
string::String,
sync::Arc,
thread,
time,
};
use libc;
use smallvec::SmallVec;
use raw_window_handle::{HasRawWindowHandle, RawWindowHandle};
use memoffset::offset_of;
use simple_logger::SimpleLogger;
use winit::{
dpi::PhysicalSize,
event::{
Event, KeyboardInput, WindowEvent,
ElementState, StartCause, VirtualKeyCode,
DeviceEvent,
},
monitor::{MonitorHandle, VideoMode},
event_loop::{ControlFlow, EventLoop},
window::{Fullscreen, Window, WindowBuilder}
};
use structopt::StructOpt;
const TITLE: &str = "vulkan-routine-4500";
const FRAMES_IN_FLIGHT: usize = 2;
const LAYER_KHRONOS_VALIDATION: *const c_char = cstr!("VK_LAYER_KHRONOS_validation");
const SHADER_VERT: &[u8] = include_bytes!("../spv/s_300__.vert.spv");
const SHADER_FRAG: &[u8] = include_bytes!("../spv/s1.frag.spv");
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct FrameData {
present_semaphore: vk::Semaphore,
render_semaphore: vk::Semaphore,
command_pool: vk::CommandPool,
command_buffer: vk::CommandBuffer,
}
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct VertexV3 {
pos: [f32; 4],
color: [f32; 4],
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct PushConstants {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct UniformBufferObject {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[derive(Debug, StructOpt)]
struct Opt {
#[structopt(short, long)]
validation_layers: bool,
}
struct App {
entry: Arc<erupt::EntryLoader>,
instance: Arc<erupt::InstanceLoader>,
data: AppData,
device: Arc<erupt::DeviceLoader>,
frame: usize,
resized: bool,
start: std::time::Instant,
models: usize,
}
impl App {
unsafe fn create(window: &Window) -> Result<Self, &str> {
// Plan: create the entry loader and a default AppData, build the instance,
// then create the surface and store it on data.
let opt = Opt { validation_layers: true };
let entry = Arc::new(EntryLoader::new().unwrap());
println!(
"\n\n{} - Vulkan Instance {}.{}.{}",
TITLE,
vk::api_version_major(entry.instance_version()),
vk::api_version_minor(entry.instance_version()),
vk::api_version_patch(entry.instance_version())
);
let mut data = AppData::default();
let instance = create_instance(window, &entry, &mut data).unwrap();
let surface = unsafe { surface::create_surface(&instance, &window, None) }.unwrap();
data.surface = surface;
// Note: App::create receives a Window whose EventLoop already exists; building another
// event loop here is redundant (winit does not support multiple event loops), so the
// fullscreen mode would have to be chosen by the caller that owns the loop.
// let fullscreen = Fullscreen::Exclusive(prompt_for_video_mode(&prompt_for_monitor(&event_loop)));
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to device_layers from validation.");
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
let messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
unsafe { instance.create_debug_utils_messenger_ext(&messenger_info, None) }.unwrap()
} else {
Default::default()
};
let (physical_device, queue_family, format, present_mode, device_properties) =
unsafe { instance.enumerate_physical_devices(None) }
.unwrap()
.into_iter()
.filter_map(|physical_device| unsafe {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
// (
// .contains(vk::QueueFlags::GRAPHICS)
// && .contains(vk::QueueFlags::TRANSFER)
// )
// .contains(vk::QueueFlags::TRANSFER)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
println!("Using physical device: {:?}", unsafe {
CStr::from_ptr(device_properties.device_name.as_ptr())
});
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new()
.fill_mode_non_solid(true)
.depth_clamp(true);
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device =
Arc::new(unsafe { DeviceLoader::new(&instance, physical_device, &device_info) }.unwrap());
Ok(Self {
entry,
instance,
data,
device,
frame: 0,
resized: false,
start: std::time::Instant::now(),
models: 1,
})
}
}
#[derive(Clone, Debug, Default)]
struct AppData {
messenger: vk::DebugUtilsMessengerEXT,
surface: vk::SurfaceKHR,
physical_device: vk::PhysicalDevice,
msaa_samples: vk::SampleCountFlags,
graphics_queue: vk::Queue,
present_queue: vk::Queue,
swapchain_format: vk::Format,
swapchain_extent: vk::Extent2D,
swapchain: vk::SwapchainKHR,
swapchain_images: Vec<vk::Image>,
swapchain_image_views: Vec<vk::ImageView>,
render_pass: vk::RenderPass,
descriptor_set_layout: vk::DescriptorSetLayout,
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
framebuffers: Vec<vk::Framebuffer>,
color_image: vk::Image,
color_image_memory: vk::DeviceMemory,
color_image_view: vk::ImageView,
depth_image: vk::Image,
depth_image_memory: vk::DeviceMemory,
depth_image_view: vk::ImageView,
mip_levels: u32,
vertices: Vec<VertexV3>,
indices: Vec<u32>,
vertex_buffer: vk::Buffer,
vertex_buffer_memory: vk::DeviceMemory,
index_buffer: vk::Buffer,
index_buffer_memory: vk::DeviceMemory,
uniform_buffers: Vec<vk::Buffer>,
uniform_buffers_memory: Vec<vk::DeviceMemory>,
descriptor_pool: vk::DescriptorPool,
descriptor_sets: Vec<vk::DescriptorSet>,
image_available_semaphores: Vec<vk::Semaphore>,
render_finished_semaphores: Vec<vk::Semaphore>,
in_flight_fences: Vec<vk::Fence>,
images_in_flight: Vec<vk::Fence>,
command_pool: vk::CommandPool,
command_pools: Vec<vk::CommandPool>,
command_buffers: Vec<vk::CommandBuffer>,
}
// static mut log_300: Vec<String> = vec!();
unsafe extern "system" fn debug_callback(
_message_severity: vk::DebugUtilsMessageSeverityFlagBitsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
let str_99 = String::from(CStr::from_ptr((*p_callback_data).p_message).to_string_lossy());
// log_300.push(str_99 );
eprintln!(
"{}",
CStr::from_ptr((*p_callback_data).p_message).to_string_lossy()
);
vk::FALSE
}
fn update_uniform_buffer(
device: &DeviceLoader,
uniform_transform: &mut UniformBufferObject,
ubo_mems: &mut Vec<vk::DeviceMemory>,
ubos: &mut Vec<vk::Buffer>,
current_image: usize,
delta_time: f32
)
{
uniform_transform.model =
Matrix4::from_axis_angle(Vector3::new(1.0, 0.0, 0.0), Deg(0.330) * delta_time)
* uniform_transform.model;
let uni_transform_slice = [uniform_transform.clone()];
let buffer_size = (std::mem::size_of::<UniformBufferObject>() * uni_transform_slice.len()) as u64;
unsafe {
let data_ptr =
device.map_memory(
ubo_mems[current_image],
0,
buffer_size,
vk::MemoryMapFlags::empty(),
).expect("Failed to map memory.") as *mut UniformBufferObject;
data_ptr.copy_from_nonoverlapping(uni_transform_slice.as_ptr(), uni_transform_slice.len());
device.unmap_memory(ubo_mems[current_image]);
}
}
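// Both call sites pass a literal constant as delta_time. A sketch (hypothetical helper,
// not wired into the routines) of deriving a real per-frame delta from std::time::Instant,
// which could sit next to the `start` field on App:
fn frame_delta_seconds(last_update: &mut std::time::Instant) -> f32 {
    let now = std::time::Instant::now();
    let dt = now.duration_since(*last_update).as_secs_f32();
    *last_update = now;
    dt
}
// e.g. update_uniform_buffer(&device, &mut uniform_transform, &mut ubo_mems, &mut ubos,
//      image_index as usize, frame_delta_seconds(&mut last_update));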
fn create_buffer(
device: &DeviceLoader,
// flags: vk::BufferCreateFlags,
size: vk::DeviceSize,
usage: vk::BufferUsageFlags,
memory_type_index: u32,
// queue_family_indices: &[u32],
) -> (vk::Buffer, vk::DeviceMemory) {
let buffer_create_info = vk::BufferCreateInfoBuilder::new()
// .flags(&[])
.size(size)
.usage(usage)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0]);
let buffer = unsafe {
device.create_buffer(&buffer_create_info, None)
.expect("Failed to create buffer.")
};
let mem_reqs = unsafe { device.get_buffer_memory_requirements(buffer) };
let allocate_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(mem_reqs.size)
.memory_type_index(memory_type_index);
let buffer_memory = unsafe {
device
.allocate_memory(&allocate_info, None)
.expect("Failed to allocate memory for buffer.")
};
unsafe {
device
.bind_buffer_memory(buffer, buffer_memory, 0)
.expect("Failed to bind buffer.")
};
(buffer, buffer_memory)
}
pub fn vulkan_routine_4500() {
let opt = Opt { validation_layers: true };
println!("Use validation layers: {}", opt.validation_layers);
let event_loop = EventLoop::new();
// let fullscreen = Some(match num {
// 1 => Fullscreen::Exclusive(prompt_for_video_mode(&prompt_for_monitor(&event_loop))),
// 2 => Fullscreen::Borderless(Some(prompt_for_monitor(&event_loop))),
// _ => panic!("Please enter a valid number"),
// });
let fullscreen = Fullscreen::Exclusive(prompt_for_video_mode(&prompt_for_monitor(&event_loop)));
let window = WindowBuilder::new()
.with_title(TITLE)
// .with_fullscreen(Some(fullscreen.clone()))
// .with_inner_size(winit::dpi::PhysicalSize::new(3840, 2160))
.with_maximized(true)
.with_resizable(false)
.build(&event_loop)
.unwrap();
let entry = Arc::new(EntryLoader::new().unwrap());
println!(
"\n\n{} - Vulkan Instance {}.{}.{}",
TITLE,
vk::api_version_major(entry.instance_version()),
vk::api_version_minor(entry.instance_version()),
vk::api_version_patch(entry.instance_version())
);
let application_name = CString::new("Vulkan-Routine-1200").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
println!("\nPushing to instance extensions from validations.");
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to instance layers from validation.");
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to device_layers from validation.");
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
let instance = Arc::new(unsafe { InstanceLoader::new(&entry, &instance_info) }.unwrap());
let messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
unsafe { instance.create_debug_utils_messenger_ext(&messenger_info, None) }.unwrap()
} else {
Default::default()
};
let surface = unsafe { surface::create_surface(&instance, &window, None) }.unwrap();
let (physical_device, queue_family, format, present_mode, device_properties) =
unsafe { instance.enumerate_physical_devices(None) }
.unwrap()
.into_iter()
.filter_map(|physical_device| unsafe {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
// (
// .contains(vk::QueueFlags::GRAPHICS)
// && .contains(vk::QueueFlags::TRANSFER)
// )
// .contains(vk::QueueFlags::TRANSFER)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
println!("Using physical device: {:?}", unsafe {
CStr::from_ptr(device_properties.device_name.as_ptr())
});
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new()
.fill_mode_non_solid(true)
.depth_clamp(true);
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device =
Arc::new(unsafe { DeviceLoader::new(&instance, physical_device, &device_info) }.unwrap());
let queue = unsafe { device.get_device_queue(queue_family, 0) };
let surface_caps =
unsafe { instance.get_physical_device_surface_capabilities_khr(physical_device, surface) }
.unwrap();
let mut image_count = surface_caps.min_image_count + 1;
if surface_caps.max_image_count > 0 && image_count > surface_caps.max_image_count {
image_count = surface_caps.max_image_count;
}
let swapchain_image_extent = match surface_caps.current_extent {
vk::Extent2D {
width: u32::MAX,
height: u32::MAX,
} => {
let PhysicalSize { width, height } = window.inner_size();
vk::Extent2D { width, height }
}
normal => normal,
};
let swapchain_info = vk::SwapchainCreateInfoKHRBuilder::new()
.surface(surface)
.min_image_count(image_count)
.image_format(format.format)
.image_color_space(format.color_space)
.image_extent(swapchain_image_extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.pre_transform(surface_caps.current_transform)
.composite_alpha(vk::CompositeAlphaFlagBitsKHR::OPAQUE_KHR)
.present_mode(present_mode)
.clipped(true)
.old_swapchain(vk::SwapchainKHR::null());
let swapchain = unsafe { device.create_swapchain_khr(&swapchain_info, None) }.unwrap();
let swapchain_images = unsafe { device.get_swapchain_images_khr(swapchain, None) }.unwrap();
let swapchain_image_views: Vec<_> = swapchain_images
.iter()
.map(|swapchain_image| {
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(*swapchain_image)
.view_type(vk::ImageViewType::_2D)
.format(format.format)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(
vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1)
.build(),
);
unsafe { device.create_image_view(&image_view_info, None) }.unwrap()
})
.collect();
let command_pool_info = vk::CommandPoolCreateInfoBuilder::new()
.queue_family_index(queue_family)
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER);
let command_pool = unsafe { device.create_command_pool(&command_pool_info, None) }.unwrap();
let model_path: &'static str = "assets/terrain__002__.obj";
let (models, materials) = tobj::load_obj(&model_path, &tobj::LoadOptions::default()).expect("Failed to load model object!");
let model = models[0].clone();
let materials = materials.unwrap();
let material = materials.clone().into_iter().nth(0).unwrap();
let mut vertices_terr = vec![];
// let mut indices_terr = vec![];
let mesh = model.mesh;
let total_vertices_count = mesh.positions.len() / 3;
for i in 0..total_vertices_count {
let vertex = VertexV3 {
pos: [
mesh.positions[i * 3],
mesh.positions[i * 3 + 1],
mesh.positions[i * 3 + 2],
1.0,
],
color: [0.8, 0.20, 0.30, 0.40],
};
vertices_terr.push(vertex);
};
let indices_terr_full = mesh.indices.clone();
// Keep only the first half of the index list (renders only part of the mesh).
let mut indices_terr = vec![];
for i in 0..(indices_terr_full.len() / 2) {
indices_terr.push(indices_terr_full[i]);
}
let h_coord = 1.0; // w component for homogeneous coordinates: 1.0 for positions (points), 0.0 for directions.
let z_delta = 0.40;
let mut vertices_200 : Vec<VertexV3> = vec![
VertexV3 {
pos: [
-0.25,
-0.25,
z_delta,
h_coord,
],
color: [0.3, 0.01, 0.6, 0.58],
},
VertexV3 {
pos: [
0.25,
-0.25,
z_delta,
h_coord,
],
color: [0.3, 0.301, 0.31, 0.38],
},
VertexV3 {
pos: [
0.25,
0.25,
z_delta,
h_coord,
],
color: [0.13, 0.91, 0.23, 0.48],
},
VertexV3 {
pos: [
-0.25,
0.25,
z_delta,
h_coord,
],
color: [0.3, 0.81, 0.1, 0.41],
},
VertexV3 {
pos: [
-0.25,
-0.25,
0.0,
h_coord,
],
color: [0.3, 0.71, 0.6, 0.28],
},
VertexV3 {
pos: [
0.25,
-0.25,
0.0,
h_coord,
],
color: [0.3, 0.301, 0.61, 0.48],
},
VertexV3 {
pos: [
0.25,
0.25,
0.0,
h_coord,
],
color: [0.13, 0.91, 0.6, 0.38],
},
VertexV3 {
pos: [
-0.25,
0.25,
0.0,
h_coord,
],
color: [0.5, 0.81, 0.6, 0.21],
},
];
let indices : Vec<u32> = vec![
4, 5, 6,
6, 4, 7,
0, 1, 2,
2, 0, 3,
0, 3, 4,
4, 3, 7,
0, 1, 4,
4, 5, 1,
3, 2, 6,
6, 7, 3,
1, 2, 5,
2, 5, 6,
];
println!("size of vertices_200 {:?}", vertices_200.len());
println!("size of indices {}", indices.len());
let physical_device_memory_properties = unsafe { instance.get_physical_device_memory_properties(physical_device) };
println!("\n physical_device_memory_properties --------93939: {:?}", physical_device_memory_properties.memory_types);
let vb_size = (std::mem::size_of::<VertexV3>() * vertices_terr.len()) as vk::DeviceSize;
let ib_size = (std::mem::size_of::<u32>() * indices_terr.len()) as vk::DeviceSize;
let idx_stg_buf_info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let stg_buf_idx = unsafe {
device.create_buffer(&idx_stg_buf_info, None).expect("Failed to create a staging buffer.")
};
let stg_buf_idx_mem_reqs = unsafe {
device.get_buffer_memory_requirements(stg_buf_idx)
};
let stg_buf_idx_mem_alloc_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(stg_buf_idx_mem_reqs.size)
.memory_type_index(2);
let stg_buf_idx_mem = unsafe {
device.allocate_memory(&stg_buf_idx_mem_alloc_info, None)
}.unwrap();
unsafe {
device.bind_buffer_memory(stg_buf_idx, stg_buf_idx_mem, 0).unwrap();
let stg_buf_idx_data_ptr = device.map_memory(
stg_buf_idx_mem,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut u32;
stg_buf_idx_data_ptr.copy_from_nonoverlapping(indices_terr.as_ptr(), indices_terr.len());
device.unmap_memory(stg_buf_idx_mem);
}
let idx_buf_info = vk::BufferCreateInfoBuilder::new()
.size(ib_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::INDEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let idx_buf = unsafe {
device.create_buffer(&idx_buf_info, None)
.expect("Failed to create index buffer.")
};
let ib_mem_reqs = unsafe {
device.get_buffer_memory_requirements(idx_buf)
};
let idx_mem_alloc_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(ib_mem_reqs.size)
.memory_type_index(1);
let ib_mem = unsafe {
device.allocate_memory(&idx_mem_alloc_info, None)
}.unwrap();
unsafe { device.bind_buffer_memory(idx_buf, ib_mem, 0).unwrap() };
let cmd_82_alloc_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cmd_82_arr = unsafe { device.allocate_command_buffers(&cmd_82_alloc_info) }.unwrap();
let cmd_82 = cmd_82_arr[0];
let cmd_82_begin_info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
unsafe { device.begin_command_buffer(cmd_82_arr[0], &cmd_82_begin_info) }.unwrap();
let buf_copy_82_info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(ib_size);
unsafe { device.cmd_copy_buffer(cmd_82, stg_buf_idx, idx_buf, &[buf_copy_82_info]) };
unsafe { device.end_command_buffer(cmd_82_arr[0]).unwrap() };
let submit_82_info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(&cmd_82_arr)
.signal_semaphores(&[]);
unsafe { device.queue_submit(queue, &[submit_82_info], vk::Fence::null()).unwrap() };
let staging_buffer_create_info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_SRC)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let staging_buffer = unsafe { device.create_buffer(&staging_buffer_create_info, None).expect("failed to create staging buffer") };
let staging_buffer_memory_reqs = unsafe { device.get_buffer_memory_requirements(staging_buffer) };
let staging_buffer_memory_allocate_info =
vk::MemoryAllocateInfoBuilder::new()
.allocation_size(staging_buffer_memory_reqs.size)
.memory_type_index(2);
let staging_buffer_memory = unsafe {
device.allocate_memory(&staging_buffer_memory_allocate_info, None)
}.unwrap();
unsafe {
device.bind_buffer_memory(staging_buffer, staging_buffer_memory, 0).unwrap();
let staging_data_ptr = device.map_memory(
staging_buffer_memory,
0,
vk::WHOLE_SIZE,
vk::MemoryMapFlags::empty(),
).unwrap() as *mut VertexV3;
staging_data_ptr.copy_from_nonoverlapping(vertices_terr.as_ptr(), vertices_terr.len());
device.unmap_memory(staging_buffer_memory);
};
let vertex_buffer_create_info = vk::BufferCreateInfoBuilder::new()
.size(vb_size)
.usage(vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let vertex_buffer = unsafe {
device.create_buffer(&vertex_buffer_create_info, None)
.expect("Failed to ccreate vertex buffer.")
};
let vertex_buffer_memory_reqs = unsafe {
device.get_buffer_memory_requirements(vertex_buffer)
};
let vertex_buffer_memory_allocate_info =
vk::MemoryAllocateInfoBuilder::new()
.allocation_size(vertex_buffer_memory_reqs.size)
.memory_type_index(1);
let vertex_buffer_memory = unsafe {
device.allocate_memory(&vertex_buffer_memory_allocate_info, None)
}.unwrap();
unsafe {device.bind_buffer_memory(vertex_buffer, vertex_buffer_memory, 0).unwrap() };
let cmd_buf_allocate_info_2 = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(1);
let cmd_bufs_2 = unsafe { device.allocate_command_buffers(&cmd_buf_allocate_info_2) }.unwrap();
let cmd_buf_2 = cmd_bufs_2[0];
let cmd_buf_2_begin_info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
unsafe { device.begin_command_buffer(cmd_bufs_2[0], &cmd_buf_2_begin_info) }.unwrap();
let buf_copy_info = vk::BufferCopyBuilder::new()
.src_offset(0)
.dst_offset(0)
.size(vb_size);
unsafe { device.cmd_copy_buffer(cmd_buf_2, staging_buffer, vertex_buffer, &[buf_copy_info]) };
unsafe { device.end_command_buffer(cmd_bufs_2[0]).unwrap() };
let submit_info_2 = vk::SubmitInfoBuilder::new()
.wait_semaphores(&[])
.command_buffers(&cmd_bufs_2)
.signal_semaphores(&[]);
unsafe { device.queue_submit(queue, &[submit_info_2], vk::Fence::null()).unwrap() };
let info101 = vk::DescriptorSetLayoutBindingFlagsCreateInfoBuilder::new()
.binding_flags(&[vk::DescriptorBindingFlags::empty()]);
let s33 = [vk::Sampler::null()];
let binding99 : vk::DescriptorSetLayoutBindingBuilder = vk::DescriptorSetLayoutBindingBuilder::new()
.binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(1)
.stage_flags(vk::ShaderStageFlags::VERTEX)
.immutable_samplers(&s33);
let binding_slc_99 = &[binding99];
let d_set_layout_info = vk::DescriptorSetLayoutCreateInfoBuilder::new()
.flags(vk::DescriptorSetLayoutCreateFlags::empty()) // ? https://docs.rs/erupt/0.22.0+204/erupt/vk1_0/struct.DescriptorSetLayoutCreateFlags.html
.bindings(binding_slc_99);
let descriptor_set_layout = unsafe { device.create_descriptor_set_layout(&d_set_layout_info, None).unwrap() };
let ubo_size = ::std::mem::size_of::<UniformBufferObject>();
let mut uniform_buffers: Vec<vk::Buffer> = vec![];
let mut uniform_buffers_memories: Vec<vk::DeviceMemory> = vec![];
let swapchain_image_count = swapchain_images.len();
println!("\n\n----------------------------------
\n
swapchain_image_count {:?}
\n\n--------------------------------------
", swapchain_image_count);
for _ in 0..swapchain_image_count {
let (uniform_buffer, uniform_buffer_memory) = create_buffer(
&device,
ubo_size as u64,
vk::BufferUsageFlags::UNIFORM_BUFFER,
2,
);
uniform_buffers.push(uniform_buffer);
uniform_buffers_memories.push(uniform_buffer_memory);
}
let mut uniform_transform = UniformBufferObject {
model:
// Matrix4::from_translation(Vector3::new(5.0, 5.0, 5.0))
Matrix4::from_angle_y(Deg(10.0))
* Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5),
view: Matrix4::look_at_rh(
Point3::new(0.40, 0.40, 0.40),
Point3::new(0.0, 0.0, 0.0),
Vector3::new(0.0, 0.0, 1.0),
),
proj: {
let mut proj = cgmath::perspective(
Deg(30.0),
swapchain_image_extent.width as f32
/ swapchain_image_extent.height as f32,
0.1,
10.0,
);
proj[1][1] = proj[1][1] * -1.0;
proj
},
};
let pool_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(swapchain_image_count as u32);
let pool_sizes = &[pool_size];
let set_layouts = &[descriptor_set_layout];
let pool_info = vk::DescriptorPoolCreateInfoBuilder::new()
.pool_sizes(pool_sizes)
.max_sets(swapchain_image_count as u32);
let desc_pool = unsafe {
device.create_descriptor_pool(&pool_info, None)
}.unwrap();
let d_set_alloc_info = vk::DescriptorSetAllocateInfoBuilder::new()
.descriptor_pool(desc_pool)
.set_layouts(set_layouts);
let d_sets = unsafe { device.allocate_descriptor_sets(&d_set_alloc_info).expect("failed in alloc DescriptorSet") };
let ubo_size = ::std::mem::size_of::<UniformBufferObject>() as u64;
for i in 0..swapchain_image_count {
let d_buf_info = vk::DescriptorBufferInfoBuilder::new()
.buffer(uniform_buffers[i])
.offset(0)
.range(ubo_size);
let d_buf_infos = [d_buf_info];
let d_write_builder = vk::WriteDescriptorSetBuilder::new()
.dst_set(d_sets[0])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(&d_buf_infos);
let d_write_sets = [d_write_builder];
unsafe { device.update_descriptor_sets(&d_write_sets, &[]) };
update_uniform_buffer(&device, &mut uniform_transform, &mut uniform_buffers_memories, &mut uniform_buffers, i as usize, 2.3);
// One uniform buffer per swapchain image; note only one descriptor set was allocated above, so each iteration rewrites d_sets[0] (see the per-image sketch below).
}
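// Sketch (not the author's code) of the conventional alternative: allocate one descriptor
// set per swapchain image (the pool's max_sets above already allows it) and bind each
// uniform buffer to its own set instead of rewriting d_sets[0]:
// let layouts = vec![descriptor_set_layout; swapchain_image_count];
// let alloc_info = vk::DescriptorSetAllocateInfoBuilder::new()
//     .descriptor_pool(desc_pool)
//     .set_layouts(&layouts);
// let d_sets = unsafe { device.allocate_descriptor_sets(&alloc_info) }.unwrap();
// for (i, &set) in d_sets.iter().enumerate() {
//     // same WriteDescriptorSet as above, but with .dst_set(set) and .buffer(uniform_buffers[i])
// }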
// Create Depth Resources:
let depth_image_info = vk::ImageCreateInfoBuilder::new()
.flags(vk::ImageCreateFlags::empty())
.image_type(vk::ImageType::_2D)
.format(vk::Format::D32_SFLOAT)
.extent(vk::Extent3D {
width: swapchain_image_extent.width,
height: swapchain_image_extent.height,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlagBits::_1)
.tiling(vk::ImageTiling::OPTIMAL)
.usage(vk::ImageUsageFlags::DEPTH_STENCIL_ATTACHMENT)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0])
.initial_layout(vk::ImageLayout::UNDEFINED);
let depth_image = unsafe {
device.create_image(&depth_image_info, None)
.expect("Failed to create depth (texture) Image.")
};
let dpth_img_mem_reqs = unsafe { device.get_image_memory_requirements(depth_image) };
let dpth_img_mem_info = vk::MemoryAllocateInfoBuilder::new()
.memory_type_index(1)
.allocation_size(dpth_img_mem_reqs.size);
let depth_image_memory = unsafe {
device.allocate_memory(&dpth_img_mem_info, None)
.expect("Failed to alloc mem for depth image.")
};
unsafe {
device.bind_image_memory(depth_image, depth_image_memory, 0)
.expect("Failed to bind depth image memory.")
};
let depth_image_view_info = vk::ImageViewCreateInfoBuilder::new()
.flags(vk::ImageViewCreateFlags::empty())
.image(depth_image)
.view_type(vk::ImageViewType::_2D)
.format(vk::Format::D32_SFLOAT)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::DEPTH,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
});
let depth_image_view = unsafe {
device.create_image_view(&depth_image_view_info, None)
.expect("Failed to create image view.")
};
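// Sketch (not in the original): rather than hard-coding D32_SFLOAT here, in the image view,
// and again in the render pass below, a supported depth format can be picked from a
// preference list first:
// let depth_format = [
//     vk::Format::D32_SFLOAT,
//     vk::Format::D32_SFLOAT_S8_UINT,
//     vk::Format::D24_UNORM_S8_UINT,
// ]
// .iter()
// .copied()
// .find(|&f| {
//     let props = unsafe { instance.get_physical_device_format_properties(physical_device, f) };
//     props
//         .optimal_tiling_features
//         .contains(vk::FormatFeatureFlags::DEPTH_STENCIL_ATTACHMENT)
// })
// .expect("No supported depth format.");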
let entry_point = CString::new("main").unwrap();
let vert_decoded = utils::decode_spv(SHADER_VERT).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&vert_decoded);
let shader_vert = unsafe { device.create_shader_module(&module_info, None) }.unwrap();
let frag_decoded = utils::decode_spv(SHADER_FRAG).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&frag_decoded);
let shader_frag = unsafe { device.create_shader_module(&module_info, None) }.unwrap();
let shader_stages = vec![
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::VERTEX)
.module(shader_vert)
.name(&entry_point),
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::FRAGMENT)
.module(shader_frag)
.name(&entry_point),
];
let vertex_buffer_bindings_desc_info = vk::VertexInputBindingDescriptionBuilder::new()
.binding(0)
.stride(std::mem::size_of::<VertexV3>() as u32)
.input_rate(vk::VertexInputRate::VERTEX);
let vert_buff_att_desc_info_pos = vk::VertexInputAttributeDescriptionBuilder::new()
.location(0)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, pos) as u32,);
let vert_buff_att_desc_info_color = vk::VertexInputAttributeDescriptionBuilder::new()
.location(1)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, color) as u32,);
let vertex_input = vk::PipelineVertexInputStateCreateInfoBuilder::new()
.flags(vk::PipelineVertexInputStateCreateFlags::empty())
.vertex_binding_descriptions(&[vertex_buffer_bindings_desc_info])
.vertex_attribute_descriptions(&[vert_buff_att_desc_info_pos, vert_buff_att_desc_info_color])
.build_dangling();
let input_assembly = vk::PipelineInputAssemblyStateCreateInfoBuilder::new()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST)
.primitive_restart_enable(false);
let viewports = vec![vk::ViewportBuilder::new()
.x(0.0)
.y(0.0)
.width(swapchain_image_extent.width as f32)
.height(swapchain_image_extent.height as f32)
.min_depth(0.0)
.max_depth(1.0)];
let scissors = vec![vk::Rect2DBuilder::new()
.offset(vk::Offset2D { x: 0, y: 0 })
.extent(swapchain_image_extent)];
let viewport_state = vk::PipelineViewportStateCreateInfoBuilder::new()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer = vk::PipelineRasterizationStateCreateInfoBuilder::new()
.depth_clamp_enable(true)
.rasterizer_discard_enable(false)
.polygon_mode(vk::PolygonMode::LINE)
.line_width(1.0)
.cull_mode(vk::CullModeFlags::NONE)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE);
let multisampling = vk::PipelineMultisampleStateCreateInfoBuilder::new()
.sample_shading_enable(false)
.rasterization_samples(vk::SampleCountFlagBits::_1);
let color_blend_attachments = vec![vk::PipelineColorBlendAttachmentStateBuilder::new()
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.blend_enable(false)];
let color_blending = vk::PipelineColorBlendStateCreateInfoBuilder::new()
.logic_op_enable(false)
.attachments(&color_blend_attachments);
let pipeline_stencil_info = vk::PipelineDepthStencilStateCreateInfoBuilder::new()
.depth_test_enable(false)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::LESS)
.depth_bounds_test_enable(false)
.min_depth_bounds(0.0)
.max_depth_bounds(1.0)
.front(vk::StencilOpStateBuilder::new().build())
.back(vk::StencilOpStateBuilder::new().build());
let push_size = std::mem::size_of::<Matrix4<f32>>() as u32;
println!("\n\n\n push_size = {:?} \n\n\n", push_size);
println!("\n\n\nPush Constant max was 256? and here is std::mem::size_of::<Matrix4<f32>> as u32) : {:?} \n\n\n", std::mem::size_of::<Matrix4<f32>>() as u32);
println!("\n\n\nPush Crazy ^^^ Now lets try size of just f32) : {:?} \n\n\n", std::mem::size_of::<f32>() as u32);
let push_constants_ranges = [vk::PushConstantRangeBuilder::new()
.stage_flags(vk::ShaderStageFlags::VERTEX)
.offset(0)
.size(push_size)];
let desc_layouts_slc = &[descriptor_set_layout];
let pipeline_layout_info = vk::PipelineLayoutCreateInfoBuilder::new()
.push_constant_ranges(&push_constants_ranges)
.set_layouts(desc_layouts_slc);
let pipeline_layout =
unsafe { device.create_pipeline_layout(&pipeline_layout_info, None) }.unwrap();
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(format.format)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let color_attachment_refs = vec![vk::AttachmentReferenceBuilder::new()
.attachment(0)
.layout(vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL)];
let subpasses = vec![vk::SubpassDescriptionBuilder::new()
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.color_attachments(&color_attachment_refs)
.depth_stencil_attachment(&depth_attach_ref)];
let dependencies = vec![vk::SubpassDependencyBuilder::new()
.src_subpass(vk::SUBPASS_EXTERNAL)
.dst_subpass(0)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.src_access_mask(vk::AccessFlags::empty())
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE)];
let render_pass_info = vk::RenderPassCreateInfoBuilder::new()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&dependencies);
let render_pass = unsafe { device.create_render_pass(&render_pass_info, None) }.unwrap();
let pipeline_info = vk::GraphicsPipelineCreateInfoBuilder::new()
.stages(&shader_stages)
.vertex_input_state(&vertex_input)
.input_assembly_state(&input_assembly)
.depth_stencil_state(&pipeline_stencil_info)
.viewport_state(&viewport_state)
.rasterization_state(&rasterizer)
.multisample_state(&multisampling)
.color_blend_state(&color_blending)
.layout(pipeline_layout)
.render_pass(render_pass)
.subpass(0);
let pipeline = unsafe {
device.create_graphics_pipelines(vk::PipelineCache::null(), &[pipeline_info], None)
}
.unwrap()[0];
let swapchain_framebuffers: Vec<_> = swapchain_image_views
.iter()
.map(|image_view| {
let attachments = vec![*image_view, depth_image_view];
let framebuffer_info = vk::FramebufferCreateInfoBuilder::new()
.render_pass(render_pass)
.attachments(&attachments)
.width(swapchain_image_extent.width)
.height(swapchain_image_extent.height)
.layers(1);
unsafe { device.create_framebuffer(&framebuffer_info, None) }.unwrap()
})
.collect();
let cmd_buf_allocate_info = vk::CommandBufferAllocateInfoBuilder::new()
.command_pool(command_pool)
.level(vk::CommandBufferLevel::PRIMARY)
.command_buffer_count(swapchain_framebuffers.len() as _);
let cmd_bufs = unsafe { device.allocate_command_buffers(&cmd_buf_allocate_info) }.unwrap();
let mut model_matrix: Matrix4<f32> = Matrix4::from_angle_y(Deg(10.0)) * Matrix4::from_nonuniform_scale(0.5, 0.5, 0.5);
let raw_rm: *mut libc::c_void = &mut model_matrix as *mut _ as *mut libc::c_void;
for (&cmd_buf, &framebuffer) in cmd_bufs.iter().zip(swapchain_framebuffers.iter()) {
let cmd_buf_begin_info = vk::CommandBufferBeginInfoBuilder::new();
unsafe { device.begin_command_buffer(cmd_buf, &cmd_buf_begin_info) }.unwrap();
let clear_values = vec![
vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.0, 1.0],
},
},
vk::ClearValue {
depth_stencil: vk::ClearDepthStencilValue {
depth: 1.0,
stencil: 0,
},
},
];
let render_pass_begin_info = vk::RenderPassBeginInfoBuilder::new()
.render_pass(render_pass)
.framebuffer(framebuffer)
.render_area(vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: swapchain_image_extent,
})
.clear_values(&clear_values);
unsafe {
device.cmd_begin_render_pass(
cmd_buf,
&render_pass_begin_info,
vk::SubpassContents::INLINE,
);
device.cmd_push_constants(cmd_buf, pipeline_layout, vk::ShaderStageFlags::VERTEX, 0, push_size, raw_rm);
device.cmd_bind_pipeline(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline);
device.cmd_bind_index_buffer(cmd_buf, idx_buf, 0, vk::IndexType::UINT32);
device.cmd_bind_vertex_buffers(cmd_buf, 0, &[vertex_buffer], &[0]);
device.cmd_bind_descriptor_sets(cmd_buf, vk::PipelineBindPoint::GRAPHICS, pipeline_layout, 0, &d_sets, &[]);
device.cmd_draw_indexed(cmd_buf, (indices_terr.len()) as u32, ((indices_terr.len()) / 3) as u32, 0, 0, 0);
device.cmd_end_render_pass(cmd_buf);
device.end_command_buffer(cmd_buf).unwrap();
}
}
let semaphore_info = vk::SemaphoreCreateInfoBuilder::new();
let image_available_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_semaphore(&semaphore_info, None) }.unwrap())
.collect();
let render_finished_semaphores: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_semaphore(&semaphore_info, None) }.unwrap())
.collect();
let fence_info = vk::FenceCreateInfoBuilder::new().flags(vk::FenceCreateFlags::SIGNALED);
let in_flight_fences: Vec<_> = (0..FRAMES_IN_FLIGHT)
.map(|_| unsafe { device.create_fence(&fence_info, None) }.unwrap())
.collect();
let mut images_in_flight: Vec<_> = swapchain_images.iter().map(|_| vk::Fence::null()).collect();
let mut counter = 0;
// let data = &val as *const u32;
// let (_, raw_rm, _) = model_matrix.as_slice().align_to::<u8>();
// let raw_rm = ptr::addr_of!(render_matrices);
let mut frame = 0;
#[allow(clippy::collapsible_match, clippy::single_match)]
event_loop.run(move |event, _, control_flow| match event {
Event::NewEvents(StartCause::Init) => {
*control_flow = ControlFlow::Poll;
}
Event::WindowEvent { event, .. } => match event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
_ => (),
},
Event::DeviceEvent { event, .. } => match event {
DeviceEvent::Key(KeyboardInput {
virtual_keycode: Some(keycode),
state,
..
}) => match (keycode, state) {
(VirtualKeyCode::Escape, ElementState::Released) => {
*control_flow = ControlFlow::Exit
}
_ => (),
},
_ => (),
},
Event::MainEventsCleared => {
unsafe {
device
.wait_for_fences(&[in_flight_fences[frame]], true, u64::MAX)
.unwrap();
}
let image_index = unsafe {
device.acquire_next_image_khr(
swapchain,
u64::MAX,
image_available_semaphores[frame],
vk::Fence::null(),
)
}
.unwrap();
// update_uniform_buffer(&device, &mut uniform_transform, &mut uniform_buffers_memories, &mut uniform_buffers, image_index as usize, 3.2);
// counter+= 1;
// if counter % 10 == 0 {
// update_uniform_buffer(&device, &mut uniform_transform, &mut uniform_buffers_memories, &mut uniform_buffers, image_index as usize, 3.2);
// }
// println!("{:?}", PushConstants);
let image_in_flight = images_in_flight[image_index as usize];
if !image_in_flight.is_null() {
unsafe { device.wait_for_fences(&[image_in_flight], true, u64::MAX) }.unwrap();
}
images_in_flight[image_index as usize] = in_flight_fences[frame];
let wait_semaphores = vec![image_available_semaphores[frame]];
let command_buffers = vec![cmd_bufs[image_index as usize]];
// let mut m_m_m : Matrix4<f32> = Matrix4::from_angle_y(Deg(30.0)) * model_matrix;
// let raw_rm: *mut libc::c_void = &mut m_m_m as *mut _ as *mut libc::c_void;
// unsafe {
// device.cmd_push_constants(cmd_bufs[image_index as usize], pipeline_layout, vk::ShaderStageFlags::VERTEX, 0, push_size, raw_rm)
// };
let signal_semaphores = vec![render_finished_semaphores[frame]];
let submit_info = vk::SubmitInfoBuilder::new()
.wait_semaphores(&wait_semaphores)
.wait_dst_stage_mask(&[vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT])
.command_buffers(&command_buffers)
.signal_semaphores(&signal_semaphores);
unsafe {
let in_flight_fence = in_flight_fences[frame];
device.reset_fences(&[in_flight_fence]).unwrap();
device
.queue_submit(queue, &[submit_info], in_flight_fence)
.unwrap()
}
let swapchains = vec![swapchain];
let image_indices = vec![image_index];
let present_info = vk::PresentInfoKHRBuilder::new()
.wait_semaphores(&signal_semaphores)
.swapchains(&swapchains)
.image_indices(&image_indices);
unsafe { device.queue_present_khr(queue, &present_info) }.unwrap();
frame = (frame + 1) % FRAMES_IN_FLIGHT;
}
Event::LoopDestroyed => unsafe {
device.device_wait_idle().unwrap();
for &semaphore in image_available_semaphores
.iter()
.chain(render_finished_semaphores.iter())
{
device.destroy_semaphore(semaphore, None);
}
for &fence in &in_flight_fences {
device.destroy_fence(fence, None);
}
device.destroy_command_pool(command_pool, None);
for &framebuffer in &swapchain_framebuffers {
device.destroy_framebuffer(framebuffer, None);
}
device.destroy_pipeline(pipeline, None);
device.destroy_render_pass(render_pass, None);
device.destroy_pipeline_layout(pipeline_layout, None);
device.destroy_shader_module(shader_vert, None);
device.destroy_shader_module(shader_frag, None);
for &image_view in &swapchain_image_views {
device.destroy_image_view(image_view, None);
}
device.destroy_swapchain_khr(swapchain, None);
device.destroy_device(None);
instance.destroy_surface_khr(surface, None);
if !messenger.is_null() {
instance.destroy_debug_utils_messenger_ext(messenger, None);
}
instance.destroy_instance(None);
println!("Exited cleanly");
},
_ => (),
})
}
// Picks the first available monitor; an interactive chooser is sketched after prompt_for_video_mode below.
fn prompt_for_monitor(event_loop: &EventLoop<()>) -> MonitorHandle {
    event_loop
        .available_monitors()
        .next()
        .expect("No monitor available")
}
fn prompt_for_video_mode(monitor: &MonitorHandle) -> VideoMode {
    monitor
        .video_modes()
        .next()
        .expect("No video mode available for this monitor")
}
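// The interactive chooser the two helpers above were named after, as a sketch: prompt_index
// is a hypothetical helper (not used by the routine) that prints the valid range and reads
// an index from stdin, falling back to 0 on bad input.
fn prompt_index(prompt: &str, count: usize) -> usize {
    print!("{} [0..{}]: ", prompt, count.saturating_sub(1));
    stdout().flush().unwrap();
    let mut line = String::new();
    stdin().read_line(&mut line).unwrap();
    line.trim().parse::<usize>().ok().filter(|&i| i < count).unwrap_or(0)
}
// e.g. let monitor = event_loop.available_monitors()
//     .nth(prompt_index("monitor", event_loop.available_monitors().count()))
//     .expect("No monitor available");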
fn recreate_swapchain() {
}
// recreate_swapchain above is still an empty placeholder.
// create_command_pool is not a general-purpose pool creator: it builds the transient,
// per-frame pools that are reset each loop cycle, so it constructs the create-info
// internally rather than taking builder data as parameters.
unsafe fn create_command_pool (
instance: &vk::Instance,
device: &erupt::DeviceLoader,
data: &AppData,
) -> Result<vk::CommandPool, &'static str> {
let info = vk::CommandPoolCreateInfoBuilder::new()
.flags(vk::CommandPoolCreateFlags::TRANSIENT)
.queue_family_index(0); // only a single queue family (index 0) is in use for now
Ok(device.create_command_pool(&info, None).unwrap())
}
unsafe fn create_command_pools(
instance: &vk::Instance,
device: &erupt::DeviceLoader,
data: &mut AppData,
) -> Result<(), &'static str> {
data.command_pool = create_command_pool(instance, device, data)?;
let num_images = data.swapchain_images.len();
for _ in 0..num_images {
let command_pool = create_command_pool(instance, device, data)?;
data.command_pools.push(command_pool);
}
Ok(())
}
// unsafe fn update_command_buffer(image_index: usize) -> Result<()> {
// let command_pool =
// }
unsafe fn create_instance(window: &Window, entry: &erupt::EntryLoader, data: &mut AppData) -> Result<Arc<erupt::InstanceLoader>, &'static str> {
let opt = Opt { validation_layers: true };
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
println!("\nPushing to instance extensions from validations.");
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to instance layers from validation.");
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
let instance = Arc::new(InstanceLoader::new(&entry, &instance_info).unwrap());
Ok(instance)
}
unsafe fn create_swapchain(
window: &Window,
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
) {
}
// Vulkan Scratch Routine 5600 ----------------- -- -- -- ---- -
#![allow(dead_code, unused_variables, clippy::too_many_arguments, clippy::unnecessary_wraps)]
use erupt::{
cstr,
utils::{self, surface},
vk, DeviceLoader, EntryLoader, InstanceLoader,
vk::{Device, MemoryMapFlags},
};
use cgmath::{Deg, Rad, Matrix4, Point3, Vector3, Vector4};
use std::{
ffi::{c_void, CStr, CString},
fs,
fs::{write, OpenOptions},
io::{stdin, stdout, Write},
io::prelude::*,
mem::*,
os::raw::c_char,
ptr,
result::Result,
result::Result::*,
string::String,
sync::Arc,
thread,
time,
};
use libc;
use smallvec::SmallVec;
use raw_window_handle::{HasRawWindowHandle, RawWindowHandle};
use memoffset::offset_of;
use simple_logger::SimpleLogger;
use winit::{
dpi::PhysicalSize,
event::{
Event, KeyboardInput, WindowEvent,
ElementState, StartCause, VirtualKeyCode,
DeviceEvent,
},
monitor::{MonitorHandle, VideoMode},
event_loop::{ControlFlow, EventLoop},
window::{Fullscreen, Window, WindowBuilder}
};
use structopt::StructOpt;
use std::ptr::copy_nonoverlapping as memcpy;
const TITLE: &str = "vulkan-routine-4500";
const FRAMES_IN_FLIGHT: usize = 2;
const LAYER_KHRONOS_VALIDATION: *const c_char = cstr!("VK_LAYER_KHRONOS_validation");
const SHADER_VERT: &[u8] = include_bytes!("../spv/s_300__.vert.spv");
const SHADER_FRAG: &[u8] = include_bytes!("../spv/s1.frag.spv");
// Available functions:
// App struct with impl for create
// AppData struct
//
// File map
// main routine 5600
pub fn vulkan_routine_5600() {
// let opt = Opt { validation_layers: true };
// println!("Use validation layers: {}", opt.validation_layers);
let event_loop = EventLoop::new();
let window = WindowBuilder::new()
.with_title(TITLE)
// .with_fullscreen(Some(fullscreen.clone()))
// .with_inner_size(winit::dpi::PhysicalSize::new(3840, 2160))
.with_maximized(true)
.with_resizable(false)
.build(&event_loop)
.unwrap();
let mut app = unsafe { App::create(&window).unwrap() };
// let mut destroying = false;
// let mut minimized = false;
// event_loop.run ...
}
struct App {
entry: Arc<erupt::EntryLoader>,
instance: Arc<erupt::InstanceLoader>,
data: AppData,
device: Arc<erupt::DeviceLoader>,
frame: usize,
resized: bool,
start: std::time::Instant,
models: usize,
}
impl App {
unsafe fn create
<'a>
(window: &Window)
-> Result<Self, &'a str>
{
// create entry
// create data
// create instance -- mutate data?
// put surface on data
let opt = Opt { validation_layers: true };
let entry = Arc::new(EntryLoader::new().unwrap());
println!(
"\n\n{} - Vulkan Instance {}.{}.{}",
TITLE,
vk::api_version_major(entry.instance_version()),
vk::api_version_minor(entry.instance_version()),
vk::api_version_patch(entry.instance_version())
);
let mut data = AppData::default();
let instance = create_instance(window, &entry, &mut data).unwrap();
data.surface = surface::create_surface(&instance, &window, None).unwrap();
let event_loop = EventLoop::new();
let fullscreen = Fullscreen::Exclusive(prompt_for_video_mode(&prompt_for_monitor(&event_loop)));
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let device_extensions = vec![
vk::KHR_SWAPCHAIN_EXTENSION_NAME,
vk::KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,
vk::KHR_RAY_QUERY_EXTENSION_NAME,
vk::KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME,
vk::KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,
vk::KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME,
vk::KHR_SPIRV_1_4_EXTENSION_NAME,
vk::KHR_BUFFER_DEVICE_ADDRESS_EXTENSION_NAME,
vk::EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME,
];
let mut device_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to device_layers from validation.");
device_layers.push(LAYER_KHRONOS_VALIDATION);
}
data.messenger = if opt.validation_layers {
let messenger_info = vk::DebugUtilsMessengerCreateInfoEXTBuilder::new()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::WARNING_EXT
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR_EXT,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION_EXT
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE_EXT,
)
.pfn_user_callback(Some(debug_callback));
unsafe { instance.create_debug_utils_messenger_ext(&messenger_info, None) }.unwrap()
} else {
Default::default()
};
let (physical_device, queue_family, format, present_mode, device_properties) = instance.enumerate_physical_devices(None)
.unwrap()
.into_iter()
.filter_map(|physical_device| unsafe {
let queue_family = match instance
.get_physical_device_queue_family_properties(physical_device, None)
.into_iter()
.enumerate()
.position(|(i, queue_family_properties)| {
queue_family_properties
.queue_flags
.contains(vk::QueueFlags::GRAPHICS)
// (
// .contains(vk::QueueFlags::GRAPHICS)
// && .contains(vk::QueueFlags::TRANSFER)
// )
// .contains(vk::QueueFlags::TRANSFER)
&& instance
.get_physical_device_surface_support_khr(
physical_device,
i as u32,
data.surface,
)
.unwrap()
}) {
Some(queue_family) => queue_family as u32,
None => return None,
};
let formats = instance
.get_physical_device_surface_formats_khr(physical_device, data.surface, None)
.unwrap();
let format = match formats
.iter()
.find(|surface_format| {
(surface_format.format == vk::Format::B8G8R8A8_SRGB
|| surface_format.format == vk::Format::R8G8B8A8_SRGB)
&& surface_format.color_space == vk::ColorSpaceKHR::SRGB_NONLINEAR_KHR
})
.or_else(|| formats.get(0))
{
Some(surface_format) => *surface_format,
None => return None,
};
let present_mode = instance
.get_physical_device_surface_present_modes_khr(physical_device, data.surface, None)
.unwrap()
.into_iter()
.find(|present_mode| present_mode == &vk::PresentModeKHR::MAILBOX_KHR)
.unwrap_or(vk::PresentModeKHR::FIFO_KHR);
let supported_device_extensions = instance
.enumerate_device_extension_properties(physical_device, None, None)
.unwrap();
let device_extensions_supported =
device_extensions.iter().all(|device_extension| {
let device_extension = CStr::from_ptr(*device_extension);
supported_device_extensions.iter().any(|properties| {
CStr::from_ptr(properties.extension_name.as_ptr()) == device_extension
})
});
if !device_extensions_supported {
return None;
}
let device_properties = instance.get_physical_device_properties(physical_device);
Some((
physical_device,
queue_family,
format,
present_mode,
device_properties,
))
})
.max_by_key(|(_, _, _, _, properties)| match properties.device_type {
vk::PhysicalDeviceType::DISCRETE_GPU => 2,
vk::PhysicalDeviceType::INTEGRATED_GPU => 1,
_ => 0,
})
.expect("No suitable physical device found");
data.physical_device = physical_device;
data.surface_format_khr = format;
data.present_mode = present_mode;
println!("Using physical device: {:?}", unsafe {
CStr::from_ptr(device_properties.device_name.as_ptr())
});
let queue_info = vec![vk::DeviceQueueCreateInfoBuilder::new()
.queue_family_index(queue_family)
.queue_priorities(&[1.0])];
let features = vk::PhysicalDeviceFeaturesBuilder::new()
.fill_mode_non_solid(true)
.depth_clamp(true);
let device_info = vk::DeviceCreateInfoBuilder::new()
.queue_create_infos(&queue_info)
.enabled_features(&features)
.enabled_extension_names(&device_extensions)
.enabled_layer_names(&device_layers);
let device = Arc::new(DeviceLoader::new(&instance, physical_device, &device_info).unwrap());
create_swapchain
(
&window,
queue_family,
&format,
instance.clone(),
device.clone(),
&mut data
).unwrap();
create_render_pass(instance.clone(), device.clone(), &mut data).unwrap();
create_descriptor_set_layout(device.clone(), &mut data).unwrap();
create_pipeline(device.clone(), &mut data).unwrap();
create_command_pools(instance.clone(), device.clone(), &mut data).unwrap();
// create_color_objects(instance, device, &mut data);
// // We don't have color_objects atm.
create_depth_objects(instance.clone(), device.clone(), &mut data).unwrap();
create_framebuffers(device.clone(), &mut data).unwrap();
load_model(&mut data).unwrap();
create_vertex_buffer(instance.clone(), device.clone(), &mut data).unwrap();
create_index_buffer(instance.clone(), device.clone(), &mut data).unwrap();
create_uniform_buffers(instance.clone(), device.clone(), &mut data).unwrap();
create_descriptor_pool(device.clone(), &mut data).unwrap();
// The descriptor sets can only be allocated once the pool exists; earlier on,
// only the descriptor set *layouts* were needed (for the pipeline layout).
create_descriptor_sets(device.clone(), &mut data).unwrap();
create_command_buffers(device.clone(), &mut data).unwrap();
Ok(Self {
entry,
instance,
data,
device,
frame: 0,
resized: false,
start: std::time::Instant::now(),
models: 1,
})
}
}
#[derive(Clone, Debug, Default)]
struct AppData {
present_mode: erupt::extensions::khr_surface::PresentModeKHR,
messenger: vk::DebugUtilsMessengerEXT,
surface: vk::SurfaceKHR,
physical_device: vk::PhysicalDevice,
msaa_samples: vk::SampleCountFlags,
graphics_queue: vk::Queue,
present_queue: vk::Queue,
descriptor_pool: vk::DescriptorPool,
descriptor_sets: Vec<vk::DescriptorSet>,
descriptor_set_layouts: Vec<vk::DescriptorSetLayout>,
swapchain_format: vk::Format,
swapchain_extent: vk::Extent2D,
swapchain: vk::SwapchainKHR,
swapchain_images: Vec<vk::Image>,
swapchain_image_views: Vec<vk::ImageView>,
swapchain_image_extent: vk::Extent2D,
render_pass: vk::RenderPass,
format: erupt::vk::Format,
surface_format_khr: erupt::extensions::khr_surface::SurfaceFormatKHR,
pipeline_layout: vk::PipelineLayout,
pipeline: vk::Pipeline,
framebuffers: Vec<vk::Framebuffer>,
color_image: vk::Image,
color_image_memory: vk::DeviceMemory,
color_image_view: vk::ImageView,
depth_image: vk::Image,
depth_image_memory: vk::DeviceMemory,
depth_image_view: vk::ImageView,
mip_levels: u32,
vertices: Vec<VertexV3>,
indices: Vec<u32>,
vertex_buffer: vk::Buffer,
vertex_buffer_memory: vk::DeviceMemory,
index_buffer: vk::Buffer,
index_buffer_memory: vk::DeviceMemory,
uniform_buffers: Vec<vk::Buffer>,
uniform_buffers_memory: Vec<vk::DeviceMemory>,
image_available_semaphores: Vec<vk::Semaphore>,
render_finished_semaphores: Vec<vk::Semaphore>,
in_flight_fences: Vec<vk::Fence>,
images_in_flight: Vec<vk::Fence>,
command_pool: vk::CommandPool,
command_pools: Vec<vk::CommandPool>,
command_buffers: Vec<vk::CommandBuffer>,
}
// static mut log_300: Vec<String> = vec!();
unsafe extern "system" fn debug_callback(
_message_severity: vk::DebugUtilsMessageSeverityFlagBitsEXT,
_message_types: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut c_void,
) -> vk::Bool32 {
let str_99 = String::from(CStr::from_ptr((*p_callback_data).p_message).to_string_lossy());
// log_300.push(str_99 );
eprintln!(
"{}",
CStr::from_ptr((*p_callback_data).p_message).to_string_lossy()
);
vk::FALSE
}
fn update_uniform_buffer(
device: &DeviceLoader,
uniform_transform: &mut UniformBufferObject,
ubo_mems: &mut Vec<vk::DeviceMemory>,
ubos: &mut Vec<vk::Buffer>,
current_image: usize,
delta_time: f32
)
{
uniform_transform.model =
Matrix4::from_axis_angle(Vector3::new(1.0, 0.0, 0.0), Deg(0.330) * delta_time)
* uniform_transform.model;
let uni_transform_slice = [uniform_transform.clone()];
let buffer_size = (std::mem::size_of::<UniformBufferObject>() * uni_transform_slice.len()) as u64;
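// The uniform buffers are created HOST_VISIBLE | HOST_COHERENT (see
// create_uniform_buffers), so this mapped write becomes visible to the device
// without an explicit flush of the mapped range.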
unsafe {
let data_ptr =
device.map_memory(
ubo_mems[current_image],
0,
buffer_size,
vk::MemoryMapFlags::empty(),
).expect("Failed to map memory.") as *mut UniformBufferObject;
data_ptr.copy_from_nonoverlapping(uni_transform_slice.as_ptr(), uni_transform_slice.len());
device.unmap_memory(ubo_mems[current_image]);
}
}
unsafe fn create_buffer
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
size: vk::DeviceSize,
usage: vk::BufferUsageFlags,
properties: vk::MemoryPropertyFlags,
)
-> Result<(vk::Buffer, vk::DeviceMemory), &'a str>
{
let buffer_info = vk::BufferCreateInfoBuilder::new()
.size(size)
.usage(usage)
.sharing_mode(vk::SharingMode::EXCLUSIVE);
let buffer = device.create_buffer(&buffer_info, None).unwrap();
let requirements = device.get_buffer_memory_requirements(buffer);
let memory_info = vk::MemoryAllocateInfoBuilder::new()
.allocation_size(requirements.size)
.memory_type_index(get_memory_type_index(instance.clone(), data, properties, requirements).unwrap());
let buffer_memory = device.allocate_memory(&memory_info, None).unwrap();
device.bind_buffer_memory(buffer, buffer_memory, 0).unwrap();
Ok((buffer, buffer_memory))
}
// Pick a monitor: currently this just takes the first available one (no interactive prompt yet).
fn prompt_for_monitor
(event_loop: &EventLoop<()>)
-> MonitorHandle
{
let monitor = event_loop
.available_monitors()
.nth(0)
.expect("Please enter a valid ID");
monitor
}
fn prompt_for_video_mode
(monitor: &MonitorHandle)
-> VideoMode
{
let video_mode = monitor
.video_modes()
.nth(0)
.expect("Please enter a valid ID");
video_mode
}
unsafe fn create_command_pool
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &AppData,
)
-> Result<vk::CommandPool, &'a str> {
let info = vk::CommandPoolCreateInfoBuilder::new()
.flags(vk::CommandPoolCreateFlags::TRANSIENT)
.queue_family_index(0); // only a single queue family (index 0) is in use for now
Ok(device.create_command_pool(&info, None).unwrap())
}
unsafe fn create_command_pools
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.command_pool = create_command_pool(instance.clone(), device.clone(), data)?;
let num_images = data.swapchain_images.len();
for _ in 0..num_images {
let command_pool = create_command_pool(instance.clone(), device.clone(), data)?;
data.command_pools.push(command_pool);
}
Ok(())
}
unsafe fn create_instance
<'a>
(
window: &Window,
entry: &erupt::EntryLoader,
data: &mut AppData
)
-> Result<Arc<erupt::InstanceLoader>, &'a str>
{
let opt = Opt { validation_layers: true };
let application_name = CString::new("Vulkan-Routine-4500").unwrap();
let engine_name = CString::new("No Engine").unwrap();
let app_info = vk::ApplicationInfoBuilder::new()
.application_name(&application_name)
.application_version(vk::make_api_version(0, 1, 0, 0))
.engine_name(&engine_name)
.engine_version(vk::make_api_version(0, 1, 0, 0))
.api_version(vk::make_api_version(0, 1, 0, 0));
let mut instance_extensions = surface::enumerate_required_extensions(&window).unwrap();
if opt.validation_layers {
println!("\nPushing to instance extensions from validations.");
instance_extensions.push(vk::EXT_DEBUG_UTILS_EXTENSION_NAME);
}
let mut instance_layers = Vec::new();
if opt.validation_layers {
println!("\nPushing to instance layers from validation.");
instance_layers.push(LAYER_KHRONOS_VALIDATION);
}
let instance_info = vk::InstanceCreateInfoBuilder::new()
.application_info(&app_info)
.enabled_extension_names(&instance_extensions)
.enabled_layer_names(&instance_layers);
let instance = Arc::new(InstanceLoader::new(&entry, &instance_info).unwrap());
Ok(instance)
}
unsafe fn create_swapchain
<'a>
(
window: &Window,
queue_family: u32,
format: &vk::SurfaceFormatKHR,
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.graphics_queue = device.get_device_queue(queue_family, 0);
let surface_caps = instance.get_physical_device_surface_capabilities_khr(data.physical_device, data.surface).unwrap();
let mut image_count = surface_caps.min_image_count + 1;
if surface_caps.max_image_count > 0 && image_count > surface_caps.max_image_count {
image_count = surface_caps.max_image_count;
}
let swapchain_image_extent = match surface_caps.current_extent {
vk::Extent2D {
width: u32::MAX,
height: u32::MAX,
} => {
let PhysicalSize { width, height } = window.inner_size();
vk::Extent2D { width, height }
}
normal => normal,
};
let swapchain_info = vk::SwapchainCreateInfoKHRBuilder::new()
.surface(data.surface)
.min_image_count(image_count)
.image_format(format.format)
.image_color_space(format.color_space)
.image_extent(swapchain_image_extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.pre_transform(surface_caps.current_transform)
.composite_alpha(vk::CompositeAlphaFlagBitsKHR::OPAQUE_KHR)
.present_mode(data.present_mode)
.clipped(true)
.old_swapchain(vk::SwapchainKHR::null());
let swapchain = device.create_swapchain_khr(&swapchain_info, None).unwrap();
let swapchain_images = device.get_swapchain_images_khr(swapchain, None).unwrap().into_vec();
let swapchain_image_views: Vec<_> = swapchain_images
.iter()
.map(|swapchain_image| {
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(*swapchain_image)
.view_type(vk::ImageViewType::_2D)
.format(format.format)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(
vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1)
.build(),
);
unsafe { device.create_image_view(&image_view_info, None) }.unwrap()
})
.collect();
data.swapchain = swapchain;
data.swapchain_images = swapchain_images;
data.swapchain_image_views = swapchain_image_views;
data.swapchain_image_extent = swapchain_image_extent;
Ok(())
}
unsafe fn create_render_pass
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData
)
-> Result<(), &'a str>
{
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(data.surface_format_khr.format) // match the swapchain's surface format
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
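// Note: the depth attachment format above is hardcoded to D32_SFLOAT; the
// get_depth_format() helper further down could be used instead to pick a depth
// format the physical device actually supports.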
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let color_attachment_refs = vec![vk::AttachmentReferenceBuilder::new()
.attachment(0)
.layout(vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL)];
let subpasses = vec![vk::SubpassDescriptionBuilder::new()
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.color_attachments(&color_attachment_refs)
.depth_stencil_attachment(&depth_attach_ref)];
let dependencies = vec![vk::SubpassDependencyBuilder::new()
.src_subpass(vk::SUBPASS_EXTERNAL)
.dst_subpass(0)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.src_access_mask(vk::AccessFlags::empty())
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(vk::AccessFlags::COLOR_ATTACHMENT_WRITE)];
let render_pass_info = vk::RenderPassCreateInfoBuilder::new()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&dependencies);
data.render_pass = device.create_render_pass(&render_pass_info, None).unwrap();
Ok(())
}
unsafe fn create_descriptor_pool
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let ubo_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(data.swapchain_images.len() as u32);
let sampler_size = vk::DescriptorPoolSizeBuilder::new()
._type(vk::DescriptorType::COMBINED_IMAGE_SAMPLER)
.descriptor_count(data.swapchain_images.len() as u32);
let pool_sizes = &[ubo_size, sampler_size];
let info = vk::DescriptorPoolCreateInfoBuilder::new()
.pool_sizes(pool_sizes)
.max_sets(data.swapchain_images.len() as u32);
data.descriptor_pool = device.create_descriptor_pool(&info, None).unwrap();
Ok(())
}
unsafe fn create_descriptor_set_layout
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let samplers = [vk::Sampler::null()];
let sampler_bindings = [vk::DescriptorSetLayoutBindingBuilder::new()
.binding(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.descriptor_count(1)
.stage_flags(vk::ShaderStageFlags::VERTEX)
.immutable_samplers(&samplers)];
let descriptor_set_layout_info = vk::DescriptorSetLayoutCreateInfoBuilder::new()
.flags(vk::DescriptorSetLayoutCreateFlags::empty())
.bindings(&sampler_bindings);
data.descriptor_set_layouts.push(device.create_descriptor_set_layout(&descriptor_set_layout_info, None).unwrap());
Ok(())
}
unsafe fn create_pipeline
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let entry_point = CString::new("main").unwrap();
let vert_decoded = utils::decode_spv(SHADER_VERT).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&vert_decoded);
let shader_vert = device.create_shader_module(&module_info, None).unwrap();
let frag_decoded = utils::decode_spv(SHADER_FRAG).unwrap();
let module_info = vk::ShaderModuleCreateInfoBuilder::new().code(&frag_decoded);
let shader_frag = device.create_shader_module(&module_info, None).unwrap();
let shader_stages = vec![
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::VERTEX)
.module(shader_vert)
.name(&entry_point),
vk::PipelineShaderStageCreateInfoBuilder::new()
.stage(vk::ShaderStageFlagBits::FRAGMENT)
.module(shader_frag)
.name(&entry_point),
];
let vertex_buffer_bindings_desc_info = vk::VertexInputBindingDescriptionBuilder::new()
.binding(0)
.stride(std::mem::size_of::<VertexV3>() as u32)
.input_rate(vk::VertexInputRate::VERTEX);
let vert_buff_att_desc_info_pos = vk::VertexInputAttributeDescriptionBuilder::new()
.location(0)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, pos) as u32,);
let vert_buff_att_desc_info_color = vk::VertexInputAttributeDescriptionBuilder::new()
.location(1)
.binding(0)
.format(vk::Format::R32G32B32A32_SFLOAT)
.offset(offset_of!(VertexV3, color) as u32,);
// Keep the description arrays in named bindings so the pointers captured by the
// builder stay valid until pipeline creation.
let binding_descriptions = [vertex_buffer_bindings_desc_info];
let attribute_descriptions = [vert_buff_att_desc_info_pos, vert_buff_att_desc_info_color];
let vertex_input = vk::PipelineVertexInputStateCreateInfoBuilder::new()
.flags(vk::PipelineVertexInputStateCreateFlags::empty())
.vertex_binding_descriptions(&binding_descriptions)
.vertex_attribute_descriptions(&attribute_descriptions)
.build_dangling();
let input_assembly = vk::PipelineInputAssemblyStateCreateInfoBuilder::new()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST)
.primitive_restart_enable(false);
let viewports = vec![vk::ViewportBuilder::new()
.x(0.0)
.y(0.0)
.width(data.swapchain_image_extent.width as f32)
.height(data.swapchain_image_extent.height as f32)
.min_depth(0.0)
.max_depth(1.0)];
let scissors = vec![vk::Rect2DBuilder::new()
.offset(vk::Offset2D { x: 0, y: 0 })
.extent(data.swapchain_image_extent)];
let viewport_state = vk::PipelineViewportStateCreateInfoBuilder::new()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer = vk::PipelineRasterizationStateCreateInfoBuilder::new()
.depth_clamp_enable(true)
.rasterizer_discard_enable(false)
.polygon_mode(vk::PolygonMode::LINE)
.line_width(1.0)
.cull_mode(vk::CullModeFlags::NONE)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE);
let multisampling = vk::PipelineMultisampleStateCreateInfoBuilder::new()
.sample_shading_enable(false)
.rasterization_samples(vk::SampleCountFlagBits::_1);
let color_blend_attachments = vec![vk::PipelineColorBlendAttachmentStateBuilder::new()
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.blend_enable(false)];
let color_blending = vk::PipelineColorBlendStateCreateInfoBuilder::new()
.logic_op_enable(false)
.attachments(&color_blend_attachments);
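// Note: depth testing is disabled in the depth-stencil state below
// (depth_test_enable(false)); it has to be enabled for the depth attachment set
// up in the render pass to actually reject occluded fragments.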
let pipeline_stencil_info = vk::PipelineDepthStencilStateCreateInfoBuilder::new()
.depth_test_enable(false)
.depth_write_enable(true)
.depth_compare_op(vk::CompareOp::LESS)
.depth_bounds_test_enable(false)
.min_depth_bounds(0.0)
.max_depth_bounds(1.0)
.front(vk::StencilOpStateBuilder::new().build())
.back(vk::StencilOpStateBuilder::new().build());
let push_size = std::mem::size_of::<Matrix4<f32>>() as u32;
let push_constants_ranges = [vk::PushConstantRangeBuilder::new()
.stage_flags(vk::ShaderStageFlags::VERTEX)
.offset(0)
.size(push_size)];
let pipeline_layout_info = vk::PipelineLayoutCreateInfoBuilder::new()
.push_constant_ranges(&push_constants_ranges)
.set_layouts(&data.descriptor_set_layouts);
let pipeline_layout = device.create_pipeline_layout(&pipeline_layout_info, None).unwrap();
let attachments = vec![
vk::AttachmentDescriptionBuilder::new()
.format(data.format)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR),
vk::AttachmentDescriptionBuilder::new()
.format(vk::Format::D32_SFLOAT)
.samples(vk::SampleCountFlagBits::_1)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::DONT_CARE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL)
];
let depth_attach_ref = vk::AttachmentReferenceBuilder::new()
.attachment(1)
.layout(vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
let pipeline_info = vk::GraphicsPipelineCreateInfoBuilder::new()
.stages(&shader_stages)
.vertex_input_state(&vertex_input)
.input_assembly_state(&input_assembly)
.depth_stencil_state(&pipeline_stencil_info)
.viewport_state(&viewport_state)
.rasterization_state(&rasterizer)
.multisample_state(&multisampling)
.color_blend_state(&color_blending)
.layout(pipeline_layout)
.render_pass(data.render_pass)
.subpass(0);
data.pipeline = device.create_graphics_pipelines(vk::PipelineCache::null(), &[pipeline_info], None).unwrap()[0];
device.destroy_shader_module(shader_vert, None);
device.destroy_shader_module(shader_frag, None);
Ok(())
}
unsafe fn create_color_objects
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str> {
// We don't have one of these in our pipeline atm.
Ok(())
}
unsafe fn create_depth_objects
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let depth_image_info = vk::ImageCreateInfoBuilder::new()
.flags(vk::ImageCreateFlags::empty())
.image_type(vk::ImageType::_2D)
.format(vk::Format::D32_SFLOAT)
.extent(vk::Extent3D {
width: data.swapchain_image_extent.width,
height: data.swapchain_image_extent.height,
depth: 1,
})
.mip_levels(1)
.array_layers(1)
.samples(vk::SampleCountFlagBits::_1)
.tiling(vk::ImageTiling::OPTIMAL)
.usage(vk::ImageUsageFlags::DEPTH_STENCIL_ATTACHMENT)
.sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&[0])
.initial_layout(vk::ImageLayout::UNDEFINED);
let depth_image = device.create_image(&depth_image_info, None)
.expect("Failed to create depth (texture) Image.");
let dpth_img_mem_reqs = device.get_image_memory_requirements(depth_image);
let depth_img_mem_info = vk::MemoryAllocateInfoBuilder::new()
.memory_type_index(
get_memory_type_index(
instance.clone(),
data,
vk::MemoryPropertyFlags::DEVICE_LOCAL,
dpth_img_mem_reqs,
).unwrap(),
)
.allocation_size(dpth_img_mem_reqs.size);
let depth_image_memory = device.allocate_memory(&depth_img_mem_info, None)
.expect("Failed to alloc mem for depth image.");
device.bind_image_memory(depth_image, depth_image_memory, 0)
.expect("Failed to bind depth image memory.");
let depth_image_view_info = vk::ImageViewCreateInfoBuilder::new()
.flags(vk::ImageViewCreateFlags::empty())
.image(depth_image)
.view_type(vk::ImageViewType::_2D)
.format(vk::Format::D32_SFLOAT)
.components(vk::ComponentMapping {
r: vk::ComponentSwizzle::IDENTITY,
g: vk::ComponentSwizzle::IDENTITY,
b: vk::ComponentSwizzle::IDENTITY,
a: vk::ComponentSwizzle::IDENTITY,
})
.subresource_range(vk::ImageSubresourceRange {
aspect_mask: vk::ImageAspectFlags::DEPTH,
base_mip_level: 0,
level_count: 1,
base_array_layer: 0,
layer_count: 1,
});
let depth_image_view = device.create_image_view(&depth_image_view_info, None).expect("Failed to create image view.");
data.depth_image = depth_image;
data.depth_image_memory = depth_image_memory;
data.depth_image_view = depth_image_view;
Ok(())
}
unsafe fn create_image_view
<'a>
(
device: Arc<erupt::DeviceLoader>,
image: vk::Image,
format: vk::Format,
surface_format_khr: erupt::extensions::khr_surface::SurfaceFormatKHR, // likely not needed
aspects: vk::ImageAspectFlags,
mip_levels: u32,
)
-> Result<vk::ImageView, &'a str>
{
let subresource_range = vk::ImageSubresourceRangeBuilder::new()
.aspect_mask(aspects)
.base_mip_level(0)
.level_count(mip_levels)
.base_array_layer(0)
.layer_count(1);
let image_view_info = vk::ImageViewCreateInfoBuilder::new()
.image(image)
.view_type(vk::ImageViewType::_2D)
.format(format)
.subresource_range(*subresource_range);
Ok(device.create_image_view(&image_view_info, None).unwrap())
}
unsafe fn copy_buffer_to_image
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &AppData,
buffer: vk::Buffer,
image: vk::Image,
width: u32,
height: u32,
)
-> Result<(), &'a str>
{
let command_buffer = begin_single_time_commands(device.clone(), data).unwrap();
let subresource = vk::ImageSubresourceLayersBuilder::new()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.mip_level(0)
.base_array_layer(0)
.layer_count(1)
.build();
let region = vk::BufferImageCopyBuilder::new()
.buffer_offset(0)
.buffer_row_length(0)
.buffer_image_height(0)
.image_subresource(subresource)
.image_offset(vk::Offset3D { x:0, y: 0, z: 0 })
.image_extent(vk::Extent3D {
width,
height,
depth: 1,
});
device.cmd_copy_buffer_to_image(
command_buffer,
buffer,
image,
vk::ImageLayout::TRANSFER_DST_OPTIMAL,
&[region],
);
end_single_time_commands(device, data, command_buffer).unwrap();
Ok(())
}
unsafe fn begin_single_time_commands
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &AppData,
)
-> Result<vk::CommandBuffer, &'a str>
{
let cb_info = vk::CommandBufferAllocateInfoBuilder::new()
.level(vk::CommandBufferLevel::PRIMARY)
.command_pool(data.command_pool)
.command_buffer_count(1);
let command_buffer = device.allocate_command_buffers(&cb_info).unwrap()[0];
let begin_info = vk::CommandBufferBeginInfoBuilder::new()
.flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
device.begin_command_buffer(command_buffer, &begin_info).unwrap();
Ok(command_buffer)
}
unsafe fn end_single_time_commands
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &AppData,
command_buffer: vk::CommandBuffer,
)
-> Result<(), &'a str>
{
device.end_command_buffer(command_buffer).unwrap();
let command_buffers = &[command_buffer];
let info = vk::SubmitInfoBuilder::new()
.command_buffers(command_buffers);
device.queue_submit(data.graphics_queue, &[info], vk::Fence::null()).unwrap();
device.queue_wait_idle(data.graphics_queue).unwrap();
Ok(device.free_command_buffers(data.command_pool, &[command_buffer]))
}
unsafe fn create_command_buffers
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
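// Placeholder: the handles are left null for now; the intent (per the gist
// comments below) is presumably to allocate and re-record these per frame from
// the per-image TRANSIENT pools in data.command_pools.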
data.command_buffers = vec![vk::CommandBuffer::null(); data.framebuffers.len()];
Ok(())
}
unsafe fn create_descriptor_sets
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
// One descriptor set per swapchain image; they all share the single layout
// created in create_descriptor_set_layout, so the layout handle is repeated to
// match the number of sets being allocated (and the pool's max_sets).
let layouts = vec![data.descriptor_set_layouts[0]; data.swapchain_images.len()];
let info = vk::DescriptorSetAllocateInfoBuilder::new()
.descriptor_pool(data.descriptor_pool)
.set_layouts(&layouts);
data.descriptor_sets = device.allocate_descriptor_sets(&info).unwrap().to_vec();
for i in 0..data.swapchain_images.len() {
let info = vk::DescriptorBufferInfoBuilder::new()
.buffer(data.uniform_buffers[i])
.offset(0)
.range(std::mem::size_of::<UniformBufferObject>() as u64);
let buffer_info = &[info];
let ubo_write = vk::WriteDescriptorSetBuilder::new()
.dst_set(data.descriptor_sets[i])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(buffer_info);
// let info = vk::DescriptorImageInfoBuilder::new()
// .image_layout(vk::ImageLayout::SHADER)
// This part was for textures in the example; we don't have any of these yet.
// let info = vk::DescriptorImageInfoBuilder::new()
// .image_layout(::ImageLayout::SHADER_READ_ONLY_OPTIMAL)
let d_write_builder = vk::WriteDescriptorSetBuilder::new()
.dst_set(data.descriptor_sets[0])
.dst_binding(0)
.dst_array_element(0)
.descriptor_type(vk::DescriptorType::UNIFORM_BUFFER)
.buffer_info(buffer_info);
device.update_descriptor_sets(&[ubo_write], &[]);
// device.update_descriptor_sets(&[ubo_write], &[] as [vk::CopyDescriptorSet]);
}
Ok(())
}
unsafe fn get_depth_format
<'a>
(
instance: Arc<erupt::InstanceLoader>,
data: &AppData,
) -> Result<vk::Format, &'a str> {
let candidates = &[
vk::Format::D32_SFLOAT,
vk::Format::D32_SFLOAT_S8_UINT,
vk::Format::D24_UNORM_S8_UINT,
];
get_supported_format(
instance,
data,
candidates,
vk::ImageTiling::OPTIMAL,
vk::FormatFeatureFlags::DEPTH_STENCIL_ATTACHMENT,
)
}
unsafe fn get_supported_format
<'a>
(
instance: Arc<erupt::InstanceLoader>,
data: &AppData,
candidates: &[vk::Format],
tiling: vk::ImageTiling,
features: vk::FormatFeatureFlags,
)
-> Result<vk::Format, &'a str>
{
candidates
.iter()
.cloned()
.find(|f| {
let properties = instance.get_physical_device_format_properties(data.physical_device, *f);
match tiling {
vk::ImageTiling::LINEAR => properties.linear_tiling_features.contains(features),
vk::ImageTiling::OPTIMAL => properties.optimal_tiling_features.contains(features),
_ => false,
}
})
.ok_or_else(|| "false")
}
unsafe fn create_uniform_buffers
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.uniform_buffers.clear();
data.uniform_buffers_memory.clear();
for _ in 0..data.swapchain_images.len() {
let (uniform_buffer, uniform_buffer_memory) = create_buffer
(
instance.clone(),
device.clone(),
data,
std::mem::size_of::<UniformBufferObject>() as u64,
vk::BufferUsageFlags::UNIFORM_BUFFER,
vk::MemoryPropertyFlags::HOST_COHERENT | vk::MemoryPropertyFlags::HOST_VISIBLE,
).unwrap();
data.uniform_buffers.push(uniform_buffer);
data.uniform_buffers_memory.push(uniform_buffer_memory);
}
Ok(())
}
unsafe fn get_memory_type_index
<'a>
(
instance: Arc<erupt::InstanceLoader>,
data: &AppData,
properties: vk::MemoryPropertyFlags,
requirements: vk::MemoryRequirements,
)
-> Result<u32, &'a str>
{
let memory = instance.get_physical_device_memory_properties(data.physical_device);
(0..memory.memory_type_count)
.find(|i| {
let suitable = (requirements.memory_type_bits & (1 << i)) != 0;
let memory_type = memory.memory_types[*i as usize];
suitable && memory_type.property_flags.contains(properties)
})
.ok_or_else(|| "Failed to create suitable memory type")
}
unsafe fn create_index_buffer
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let size = (std::mem::size_of::<u32>() * data.indices.len()) as u64;
let (staging_buffer, staging_buffer_memory) = create_buffer
(
instance.clone(),
device.clone(),
data,
size,
vk::BufferUsageFlags::TRANSFER_SRC,
vk::MemoryPropertyFlags::HOST_COHERENT | vk::MemoryPropertyFlags::HOST_VISIBLE,
).unwrap();
let memory = device.map_memory(staging_buffer_memory, 0, size, vk::MemoryMapFlags::empty()).unwrap();
memcpy(data.indices.as_ptr(), memory.cast(), data.indices.len());
device.unmap_memory(staging_buffer_memory);
let (index_buffer, index_buffer_memory) = create_buffer
(
instance.clone(),
device.clone(),
data,
size,
vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::INDEX_BUFFER,
vk::MemoryPropertyFlags::DEVICE_LOCAL,
).unwrap();
data.index_buffer = index_buffer;
data.index_buffer_memory = index_buffer_memory;
copy_buffer(device.clone(), data, staging_buffer, index_buffer, size).unwrap();
device.destroy_buffer(staging_buffer, None);
device.free_memory(staging_buffer_memory, None);
Ok(())
}
unsafe fn copy_buffer
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
source: vk::Buffer,
destination: vk::Buffer,
size: vk::DeviceSize,
)
-> Result<(), &'a str>
{
let command_buffer = begin_single_time_commands(device.clone(), data).unwrap();
let regions = vk::BufferCopyBuilder::new().size(size);
device.cmd_copy_buffer(command_buffer, source, destination, &[regions]);
end_single_time_commands(device.clone(), data, command_buffer).unwrap();
Ok(())
}
unsafe fn create_vertex_buffer
<'a>
(
instance: Arc<erupt::InstanceLoader>,
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
let size = (std::mem::size_of::<VertexV3>() * data.vertices.len()) as u64;
let (staging_buffer, staging_buffer_memory) = create_buffer
(
instance.clone(),
device.clone(),
data,
size,
vk::BufferUsageFlags::TRANSFER_SRC,
vk::MemoryPropertyFlags::HOST_COHERENT | vk::MemoryPropertyFlags::HOST_VISIBLE,
).unwrap();
let memory = device.map_memory(staging_buffer_memory, 0, size, vk::MemoryMapFlags::empty()).unwrap();
memcpy(data.vertices.as_ptr(), memory.cast(), data.vertices.len());
device.unmap_memory(staging_buffer_memory);
let (vertex_buffer, vertex_buffer_memory) = create_buffer
(
instance.clone(),
device.clone(),
data,
size,
vk::BufferUsageFlags::TRANSFER_DST | vk::BufferUsageFlags::VERTEX_BUFFER,
vk::MemoryPropertyFlags::DEVICE_LOCAL,
).unwrap();
data.vertex_buffer = vertex_buffer;
data.vertex_buffer_memory = vertex_buffer_memory;
copy_buffer(device.clone(), data, staging_buffer, vertex_buffer, size).unwrap();
device.destroy_buffer(staging_buffer, None);
device.free_memory(staging_buffer_memory, None);
Ok(())
}
// Note: this signature hides which AppData fields are actually read versus written;
// the whole monolith is passed in, so the real inputs and outputs are invisible at the call site.
unsafe fn create_framebuffers
<'a>
(
device: Arc<erupt::DeviceLoader>,
data: &mut AppData,
)
-> Result<(), &'a str>
{
data.framebuffers = data
.swapchain_image_views
.iter()
.map(|image_view| {
let attachments = vec![*image_view, data.depth_image_view];
let create_info = vk::FramebufferCreateInfoBuilder::new()
.render_pass(data.render_pass)
.attachments(&attachments)
.width(data.swapchain_image_extent.width)
.height(data.swapchain_image_extent.height)
.layers(1);
device.create_framebuffer(&create_info, None).unwrap()
}).collect();
Ok(())
}
fn load_model
<'a>
(data: &mut AppData)
-> Result<(), &'a str>
{
// let model_path: &'static str = "assets/terrain__002__.obj";
let model_path: &str = "assets/terrain__002__.obj";
let (models, materials) = tobj::load_obj(&model_path, &tobj::LoadOptions::default()).expect("Failed to load model object!");
let model = models[0].clone();
let materials = materials.unwrap();
let material = materials.clone().into_iter().nth(0).unwrap();
let mut vertices_terr = vec![];
let mesh = model.mesh;
let total_vertices_count = mesh.positions.len() / 3;
for i in 0..total_vertices_count {
let vertex = VertexV3 {
pos: [
mesh.positions[i * 3],
mesh.positions[i * 3 + 1],
mesh.positions[i * 3 + 2],
1.0,
],
color: [0.8, 0.20, 0.30, 0.40],
};
vertices_terr.push(vertex);
};
let mut indices_terr_full = mesh.indices.clone();
let mut indices_terr = vec![];
for i in 0..(indices_terr_full.len() / 2) {
indices_terr.push(indices_terr_full[i]);
}
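// Only the first half of the OBJ's index list is kept above, as a crude way of
// shrinking the draw; note the cut is not aligned to triangle boundaries unless
// the halved count happens to be a multiple of 3.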
data.vertices = vertices_terr;
data.indices = indices_terr;
Ok(())
}
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct FrameData {
present_semaphore: vk::Semaphore,
render_semaphore: vk::Semaphore,
command_pool: vk::CommandPool,
command_buffer: vk::CommandBuffer,
}
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct VertexV3 {
pos: [f32; 4],
color: [f32; 4],
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct PushConstants {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[repr(C)]
#[derive(Clone, Debug, Copy)]
struct UniformBufferObject {
model: Matrix4<f32>,
view: Matrix4<f32>,
proj: Matrix4<f32>,
}
#[derive(Debug, StructOpt)]
struct Opt {
#[structopt(short, long)]
validation_layers: bool,
}
kulicuu commented Feb 14, 2022

Simplest impl of vertex-buffer data mapping from CPU to GPU, the memory transfer, and the configuration needed to pass the data to shaders. Parameters are specific to my system; for example, the memoryType choice depends on hardware availability.
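For reference, here is a minimal sketch (not code from the routine) of picking the memory type from the device's reported properties instead of hardcoding the index; `instance` and `physical_device` stand in for whatever the routine already has in hand:

```rust
use erupt::{vk, InstanceLoader};

// Sketch: pick the first memory type that both satisfies the resource's
// memory_type_bits mask and carries the requested property flags.
unsafe fn pick_memory_type(
    instance: &InstanceLoader,
    physical_device: vk::PhysicalDevice,
    requirements: vk::MemoryRequirements,
    properties: vk::MemoryPropertyFlags,
) -> Option<u32> {
    let mem = instance.get_physical_device_memory_properties(physical_device);
    (0..mem.memory_type_count).find(|&i| {
        let type_supported = requirements.memory_type_bits & (1 << i) != 0;
        let flags_supported = mem.memory_types[i as usize].property_flags.contains(properties);
        type_supported && flags_supported
    })
}
```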

kulicuu commented Feb 25, 2022

Massive refactor now because of the need to recycle command buffers via command-pool resets. This will allow push-constant updates from control input.
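For concreteness, a rough sketch of the shape the recycle-and-push step could take (the names are placeholders, and the free/re-allocate shown here stands in for an actual command-pool reset on the per-image pools):

```rust
use cgmath::Matrix4;
use erupt::{vk, DeviceLoader};
use std::ffi::c_void;

// Sketch only: recycle one swapchain image's command buffer each frame so a
// fresh model matrix can be pushed before the draw is re-recorded.
unsafe fn rerecord_with_push_constant(
    device: &DeviceLoader,
    pool: vk::CommandPool,               // the per-image TRANSIENT pool
    old_cb: vk::CommandBuffer,           // last frame's buffer for this image
    pipeline_layout: vk::PipelineLayout,
    model_matrix: Matrix4<f32>,
) -> vk::CommandBuffer {
    // Recycle: free the previous buffer and take a fresh one from the pool.
    device.free_command_buffers(pool, &[old_cb]);
    let alloc_info = vk::CommandBufferAllocateInfoBuilder::new()
        .command_pool(pool)
        .level(vk::CommandBufferLevel::PRIMARY)
        .command_buffer_count(1);
    let cb = device.allocate_command_buffers(&alloc_info).unwrap()[0];
    let begin_info = vk::CommandBufferBeginInfoBuilder::new()
        .flags(vk::CommandBufferUsageFlags::ONE_TIME_SUBMIT);
    device.begin_command_buffer(cb, &begin_info).unwrap();
    // Push the updated matrix; the pipeline layout reserves
    // size_of::<Matrix4<f32>>() bytes for the VERTEX stage.
    let size = std::mem::size_of::<Matrix4<f32>>() as u32;
    let ptr = &model_matrix as *const Matrix4<f32> as *const c_void;
    device.cmd_push_constants(cb, pipeline_layout, vk::ShaderStageFlags::VERTEX, 0, size, ptr);
    // ... begin render pass, bind pipeline / vertex / index buffers, draw, end render pass ...
    device.end_command_buffer(cb).unwrap();
    cb
}
```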

kulicuu commented Mar 3, 2022

New to me rendering technique: https://www.matthewtancik.com/nerf

Neural radiance field (NeRF) rendering.

kulicuu commented Mar 3, 2022

About 70% through a refactor inspired by the Vulkanalia secondary-command-buffer example (its most advanced version), though I'm looking at putting more things into Arcs; I'm not completely sure yet what will need to be shared across threads. Once the secondary command buffers and framebuffer recreation are set up, the push constants can be updated with control input for camera/view flight. After that I may write a frustum-culling function to cut the terrain mesh down to something more manageable for rendering.
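For the camera/view-flight input, roughly this kind of thing is what's meant, purely as a sketch; `FlightCamera` and its fields are invented here, not code from the routine:

```rust
use cgmath::{Deg, Matrix4, Vector3};
use winit::event::VirtualKeyCode;

// Placeholder camera state driven by winit key events.
struct FlightCamera {
    position: Vector3<f32>,
    yaw_deg: f32,
}

impl FlightCamera {
    // Fold one frame of control input into the state and return the matrix
    // that would be handed to cmd_push_constants for the VERTEX stage.
    fn update(&mut self, pressed: &[VirtualKeyCode], dt: f32) -> Matrix4<f32> {
        let move_speed = 5.0 * dt;
        let turn_speed = 45.0 * dt;
        for key in pressed {
            match key {
                VirtualKeyCode::W => self.position.z -= move_speed,
                VirtualKeyCode::S => self.position.z += move_speed,
                VirtualKeyCode::A => self.yaw_deg -= turn_speed,
                VirtualKeyCode::D => self.yaw_deg += turn_speed,
                _ => {}
            }
        }
        // View = inverse of the camera transform: rotate, then translate back.
        Matrix4::from_angle_y(Deg(self.yaw_deg)) * Matrix4::from_translation(-self.position)
    }
}
```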

kulicuu commented Mar 9, 2022

Refactor [based on Vulkanalia's](https://github.com/KyleMayes/vulkanalia/blob/master/tutorial/src/32_secondary_command_buffers.rs) continuing. I kept some of the old code and am now wondering what I want in the UBO vs. in push constants.
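One constraint that factors into the choice: Vulkan only guarantees 128 bytes of push-constant space (maxPushConstantsSize), so the single Matrix4<f32> the pipeline layout currently reserves (64 bytes) fits everywhere, while the full model/view/proj triple (192 bytes) generally belongs in the UBO unless the device limit says otherwise. A back-of-envelope check:

```rust
use cgmath::Matrix4;

fn main() {
    // cgmath's Matrix4<f32> is 16 f32s = 64 bytes.
    let one_matrix = std::mem::size_of::<Matrix4<f32>>() as u32;
    let mvp_triple = 3 * one_matrix;
    assert!(one_matrix <= 128); // fits the spec-guaranteed push-constant minimum
    assert!(mvp_triple > 128);  // does not: keep it in the UBO, or query maxPushConstantsSize
}
```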

kulicuu commented Mar 17, 2022

I had the Vulkanalia-example-derived refactor compiling but erroring in a way I couldn't trace, so I'm refactoring forward from the working routine instead, starting with simple improvements and moving on to object encapsulation.
