Jackalope / jen (public) (License: GPLv3 or later version) (since 2018-10-24) (hash sha1)
----> ABOUT:

3D rendering and computing framework based on the Vulkan API.

Libraries:
- simdcpp submodule (see my simdcpp repo)
- jmath submodule (see my jmath repo)
- mesh (constexpr generation of cubes, spheres, icosahedron subdivisions)
- atlas (1D line and 2D rectangle cutting)
- jlib submodule (see my jlib repo)
- jrf submodule (see my jrf repo)
- vkw (Vulkan API C++ wrapper)
Modules:
- compute (run compute shaders on the GPU)
- graphics (draw models with clustered forward rendering and on-screen text)
- resource manager (load meshes, models, textures and scene data from
  files and create the related objects in the graphics module)

----> INSTALLING:

To download all parts of this framework, it is enough to run
git clone with the recursive flag:

$ git clone --recursive ssh://rocketgit@ssh.rocketgit.com/user/Jackalope/jen

After this, look at the available git tags:

$ git tag

It is recommended to use a tagged version instead of the latest commit,
because the first commits after a tag usually include incompatible
parts of changes intended for the next version.

$ git checkout v0.1.0

----> DEPENDENCIES:

To use JEN as CMake subdirectory and successfully build programs with it
you need to make sure you have all of its dependencies:
- compiler: Clang or GCC with C++17 support; Clang 10+ or GCC 9+ is recommended.
  Compiling on Windows is tricky and requires something like MinGW with MSYS,
  and there are additional complications to make the dependencies work;
- GLFW3 library, supported version is 3.2.1;
- FreeType library, if the graphics module will be used;
- Vulkan API headers, and optionally the validation layers to debug sneaky
  problems; you also need Vulkan support in your graphics driver to run
  compiled programs;
- LibZip, if JRF is used to read zip files;
- CMake, for obvious reasons;
- glslangValidator, to compile shaders for the graphics module.

CMake must be able to find GLFW3, Vulkan and FreeType (for graphics)
with find_package().
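
If you want to verify up front that CMake can locate these packages, a minimal
scratch CMakeLists.txt along these lines can help. This is only a sketch, not
part of JEN: the package names used here (glfw3, Vulkan, Freetype) are the
common find_package() names, and exact versions or component names may differ
on your system.

```cmake
# Scratch dependency check -- not part of JEN.
# glfw3, Vulkan and Freetype are the usual find_package() names;
# adjust them if your distribution packages them differently.
cmake_minimum_required(VERSION 3.10)
project(jen_dep_check NONE)

find_package(glfw3 3.2.1 REQUIRED)
find_package(Vulkan REQUIRED)
find_package(Freetype REQUIRED)  # only needed for the graphics module

message(STATUS "All JEN dependencies were found.")
```

Run `cmake` on a directory containing only this file; a configure error points
at the missing dependency.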

----> HOW TO USE IT:

To use JEN, you need to add it as a subdirectory:

add_subdirectory(${PATH_TO_JEN})

There are several configuration options:
- JEN_MODULE_COMPUTE - compile and include the compute module;
- JEN_MODULE_GRAPHICS - compile and include the graphics module;
- JEN_MULTITHREADED_DRAW_FRAME - the draw_frame function will use a thread pool
  queue instead of executing linearly;
- JEN_MODULE_RESOURCE_MANAGER - enable the resource manager module;
  requires JEN_MODULE_GRAPHICS to be ON;
- JEN_VLK_VALIDATION - enable the Vulkan validation layers to debug errors
  related to JEN. This often reports false positives
  as well as true positives.
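
Putting it together, a consumer CMakeLists.txt might look like the following
sketch. The JEN target and the JEN_INCLUDE_DIRS variable match the names used
in this repo's CMake files, while my_app, main.cpp and PATH_TO_JEN are
placeholders you should replace with your own:

```cmake
# Sketch of a project using JEN as a subdirectory.
# my_app / main.cpp / PATH_TO_JEN are placeholders;
# the JEN target and JEN_INCLUDE_DIRS come from JEN's own CMakeLists.
cmake_minimum_required(VERSION 3.10)
project(my_app CXX)
set(CMAKE_CXX_STANDARD 17)

# Select modules before adding the subdirectory.
set(JEN_MODULE_GRAPHICS ON)
set(JEN_MODULE_RESOURCE_MANAGER ON)  # requires JEN_MODULE_GRAPHICS

add_subdirectory(${PATH_TO_JEN})

add_executable(my_app main.cpp)
target_include_directories(my_app PRIVATE ${JEN_INCLUDE_DIRS})
target_link_libraries(my_app PRIVATE JEN)
```
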

Look at CMakeLists.txt in the JenExamples repo for details on how to use and
configure JEN automatically:

$ git clone ssh://rocketgit@ssh.rocketgit.com/user/Jackalope/JenExamples

I also recommend compiling and running the examples to make sure everything
works correctly.

----> SUPPORTED HARDWARE:

JEN has not been extensively tested, because doing so would require running it
on a large amount of different hardware. It should work with the Mesa driver
on modern Intel (i965) GPUs as well as AMD GPUs.


----> DOCUMENTATION:

You can generate Doxygen documentation: turn on any of the JEN_DOXYGEN_*
options and build the documentation target with CMake:

$ cmake -G %1 -DJEN_DOXYGEN_HTML=ON -DJEN_DOXYGEN_LATEX=ON
$ cmake --build . --target documentation

The resource manager is not documented because it still requires significant
enhancements.
List of commits:
Subject Hash Author Date (UTC)
huge jen interface refactoring, added configuration options fabb80119c23c204118af1f4f697e74e8cbd61b8 Jackalope 2020-05-08 21:59:31
added settings, changed some namespaces and type names f0766a6b35304740cb39f93e601447ad7be7e143 Jackalope 2020-05-07 23:48:11
compute storage image support eaffc4ea4768fb67f063102793ca5d781bafb0c8 Jackalope 2020-05-04 17:48:33
get size of vkFormat pixel function 41e309467955bc2e1361a1f9689367f961040bf7 Jackalope 2020-05-04 17:48:08
vkw image to buffer copy 57234d0007df737094aef5b6bab4177ca39bb347 Jackalope 2020-05-04 17:47:08
added flag to wait events instead of polling for interactive apps b9cfe8faa84e5bfc00aa842b16819b28f5e2be10 Jackalope 2020-05-04 17:45:34
added compute info validation 7cd15af2afd064302703eccd1094f4353da35c4f Jackalope 2020-04-26 08:05:18
fixed uninitialized value 8313a622987f51e74b068231cddde3ff34a93625 Jackalope 2020-04-26 08:04:48
disable checking present support when graphics module not loaded b2ff208cef191444cd995c40989c6b9414c1fb9f Jackalope 2020-04-26 02:35:19
added debug extension loading when graphics module is disabled 54bc7c5eff1353559ec29e560a4095cfa9ef33d8 Jackalope 2020-04-26 02:32:45
Noise library: removed excess commentary lines 6b518ad33508161bd0cc8790eb13f4d03d827acc TheArtOfGriefing 2020-04-23 20:56:55
Noise library: added forgotten hash function use in gradient computation. bfce031f8de0410107aaf33009b46e455acd5095 TheArtOfGriefing 2020-04-23 18:40:05
Noise library update: +6 hash functions +1D,2D,3D,4D highly optimized simplex noise +1D,2D,3D,4D white noise 731cba496cf3c253ad365355faae9df45e1e714e TheArtOfGriefing 2020-04-23 17:37:19
simd library moved to simdcpp submodule, also updates to match new version 36fa65052847fbb258e0ceaf2a2c3fab40e5c3a7 Jackalope 2020-04-21 23:19:24
jlib update d7d711f8b289b2a84e58c66d20c0871c21d17350 Jackalope 2020-04-15 09:54:57
hiding clang-10 new warnings 6d3a1a1dbc928d9ed75024848f06f541d55e1580 Jackalope 2020-04-15 09:54:39
vkw new Vulkan API result values 0eed6a08e659a2e58ed35d9870022d235f0d87d4 Jackalope 2020-04-15 09:54:04
removed temporary fence, validation layers still complaining ba2b7277f0db6e549e68619f8f61ecc50066343d Jackalope 2020-04-15 09:41:02
device queues had incorrect orders in memory layout 28bf6b0e793ae988c390571baa5e8654200f6b42 Jackalope 2020-04-15 07:10:19
updated new vulkan enum names a7f4e2fdd01884df5469abda3520ca3901a44532 Jackalope 2020-04-15 07:09:55
Commit fabb80119c23c204118af1f4f697e74e8cbd61b8 - huge jen interface refactoring, added configuration options
Author: Jackalope
Author date (UTC): 2020-05-08 21:59
Committer name: Jackalope
Committer date (UTC): 2020-05-08 21:59
Parent(s): f0766a6b35304740cb39f93e601447ad7be7e143
Signer:
Signing key:
Signing status: N
Tree: 056414d70e685030955a7b9dd1cddadeac39bd47
File Lines added Lines deleted
CMakeLists.txt 41 5
include/jen/allocator/buffer.h 120 0
include/jen/allocator/memory.h 34 0
include/jen/camera.h 0 1
include/jen/compute.h 178 0
include/jen/configuration.h 9 0
include/jen/controls.h 92 0
include/jen/detail/cmd_container.h 4 2
include/jen/detail/descriptors.h 105 0
include/jen/detail/gpu_image.h 149 0
include/jen/framework.h 29 16
include/jen/graphics.h 74 0
include/jen/jrl.h 0 119
include/jen/light.h 31 0
include/jen/resource_manager.h 104 0
include/jen/resources.h 101 0
include/jen/result.h 6 0
include/jen/screen.h 2 2
include/jen/settings.h 145 0
include/jen/window.h 221 0
libs/vkw/include/vkw/instance.h 28 7
src/CMakeLists.txt 54 33
src/allocator/buffer.cpp 378 0
src/allocator/memory.cpp 238 0
src/compute/binding_set.h 0 81
src/compute/bindings.h 0 194
src/compute/cmd_unit.cpp 484 0
src/compute/cmd_unit.h 0 51
src/compute/compute.cpp 212 435
src/compute/compute.h 0 90
src/compute/pipeline.h 0 68
src/configuration.h.in 9 0
src/descriptors.cpp 226 0
src/device/allocator/buffer.cpp 0 123
src/device/allocator/buffer.h 0 129
src/device/allocator/buffer_allocator.cpp 0 94
src/device/allocator/buffer_allocator.h 0 129
src/device/allocator/memory.cpp 0 79
src/device/allocator/memory.h 0 56
src/device/allocator/memory_allocator.cpp 0 53
src/device/allocator/memory_allocator.h 0 64
src/device/device.cpp 21 12
src/device/device.h 5 10
src/framework.cpp 77 39
src/gpu_image.cpp 101 0
src/graphics/cmd_data.cpp 50 47
src/graphics/cmd_data.h 9 6
src/graphics/debug_overlay.cpp 15 19
src/graphics/debug_overlay.h 6 9
src/graphics/draw_data/draw_data.cpp 1 1
src/graphics/draw_data/draw_data.h 1 10
src/graphics/draw_data/text_data/atlas_buffer.cpp 40 40
src/graphics/draw_data/text_data/atlas_buffer.h 14 13
src/graphics/draw_data/text_data/glyphs.cpp 42 41
src/graphics/draw_data/text_data/glyphs.h 5 5
src/graphics/draw_data/text_data/text_data.cpp 13 13
src/graphics/draw_data/text_data/text_data.h 8 8
src/graphics/draw_stages/attachment.cpp 3 2
src/graphics/draw_stages/attachment.h 28 29
src/graphics/draw_stages/clusters.cpp 5 4
src/graphics/draw_stages/clusters.h 2 26
src/graphics/draw_stages/composition/composition.cpp 3 3
src/graphics/draw_stages/composition/composition.h 4 6
src/graphics/draw_stages/descriptors.cpp 0 203
src/graphics/draw_stages/descriptors.h 0 110
src/graphics/draw_stages/draw_stages.cpp 5 5
src/graphics/draw_stages/fonts/fonts.cpp 7 7
src/graphics/draw_stages/fonts/fonts.h 1 3
src/graphics/draw_stages/gpu_image.cpp 0 40
src/graphics/draw_stages/gpu_image.h 0 146
src/graphics/draw_stages/offscreen/offscreen.cpp 5 5
src/graphics/draw_stages/offscreen/offscreen.h 4 11
src/graphics/draw_stages/pass_depthcube.cpp 9 8
src/graphics/draw_stages/pass_depthcube.h 58 58
src/graphics/draw_stages/pass_main.cpp 3 3
src/graphics/draw_stages/swap_chain.cpp 6 6
src/graphics/gpu_transfer/data.cpp 8 8
src/graphics/gpu_transfer/data.h 7 8
src/graphics/gpu_transfer/gpu_transfer.cpp 204 202
src/graphics/gpu_transfer/gpu_transfer.h 32 34
src/graphics/graphics.cpp 100 97
src/graphics/graphics.h 25 80
src/graphics/graphics_interface.cpp 39 30
src/graphics/jrl_defs.h 0 112
src/graphics/model.h 0 51
src/graphics/resources.h 128 0
src/graphics/resources/data.h 0 24
src/graphics/resources/state.h 0 9
src/graphics/resources/text.h 0 106
src/graphics/resources/texture.h 0 33
src/graphics/settings.h 0 67
src/instance/controls.h 0 93
src/instance/instance.cpp 26 11
src/instance/instance.h 4 23
src/instance/window.h 0 223
src/resource_manager/resource_manager.cpp 126 90
src/resource_manager/resource_manager.h 108 0
src/settings.h 0 43
File CMakeLists.txt changed (mode: 100644) (index d8511c8..58f1536)
... ... add_subdirectory(libs/vkw)
83 83
84 84 add_subdirectory(src) add_subdirectory(src)
85 85
86 if(JEN_MODULE_COMPUTE)
87 message(STATUS "JEN_MODULE_COMPUTE is enabled")
88 set(JEN_MODULE_COMPUTE 1)
89 else()
90 set(JEN_MODULE_COMPUTE 0)
91 endif()
92
93 if(JEN_MODULE_GRAPHICS)
94 message(STATUS "JEN_MODULE_GRAPHICS is enabled")
95 set(JEN_MODULE_GRAPHICS 1)
96 else()
97 set(JEN_MODULE_GRAPHICS 0)
98 endif()
99
100 if(JEN_MODULE_RESOURCE_MANAGER)
101 if(JEN_MODULE_GRAPHICS)
102 message(STATUS "JEN_MODULE_RESOURCE_MANAGER is enabled")
103 set(JEN_MODULE_RESOURCE_MANAGER 1)
104 else()
105 error("JEN_MODULE_RESOURCE_MANAGER must be with JEN_MODULE_GRAPHICS")
106 endif()
107 else()
108 set(JEN_MODULE_RESOURCE_MANAGER 0)
109 endif()
110
111 set(JEN_VERSION_MAJOR 0)
112 set(JEN_VERSION_MINOR 1)
113 set(JEN_VERSION_PATCH 0)
114
115 set(JEN_CONFIGURATION_WARNING
116 "This file is generated automatically from /src/configuration.h.in")
117
118 set(JEN_NAME "JEN")
119
120 set(JEN_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/include)
121
122 configure_file(${CMAKE_CURRENT_SOURCE_DIR}/src/configuration.h.in
123 ${JEN_INCLUDE_DIR}/jen/configuration.h)
124
125
86 126 set(JEN_INCLUDE_DIRS set(JEN_INCLUDE_DIRS
87 127
88 128 ${CMAKE_CURRENT_SOURCE_DIR}/include ${CMAKE_CURRENT_SOURCE_DIR}/include
 
... ... set(JEN_INCLUDE_DIRS
100 140 ${VKW_INCLUDE_DIRS} ${VKW_INCLUDE_DIRS}
101 141 ) )
102 142
103 message("\n***---***---***\n")
104 message("JEN libs paths: ")
105 message("${JEN_INCLUDE_DIRS}")
106 message("\n***---***---***\n")
143 message(STATUS "JEN libs paths: ${JEN_INCLUDE_DIRS}")
107 144 set(JEN_INCLUDE_DIRS ${JEN_INCLUDE_DIRS} PARENT_SCOPE) set(JEN_INCLUDE_DIRS ${JEN_INCLUDE_DIRS} PARENT_SCOPE)
108 145
109 146 target_include_directories(ATLAS PUBLIC ${JEN_INCLUDE_DIRS}) target_include_directories(ATLAS PUBLIC ${JEN_INCLUDE_DIRS})
110 147 target_include_directories(JEN PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src target_include_directories(JEN PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src
111 148 ${JEN_INCLUDE_DIRS}) ${JEN_INCLUDE_DIRS})
112
File include/jen/allocator/buffer.h added (mode: 100644) (index 0000000..c82a2c4)
1 #pragma once
2 #include <vkw/buffer.h>
3 #include <vkw/descriptor_set.h>
4 #include <atlas/atlas.h>
5
6 namespace jen {
7 struct DeviceBufferPart;
8 struct DeviceBufferAtlas;
9 struct DeviceBufferAllocator;
10 struct DeviceBufferAllocatorData;
11
12 /**
13 * @brief Memory usage types supported by allocator.
14 * Allocator user specifies memory specialization, allocator will select best
15 * available memory type. BufferPart contains used memory properties,
16 * so the user must check if allocated buffer requiries flushing, or
17 * can be used without staging (DEVICE_LOCAL and HOST_VISIBLE).
18 * Buffer can also be non-DEVICE_LOCAL even if STATIC requested,
19 * because the only available memory is non DEVICE_LOCAL.
20 * Type names are based on this recommendations:
21 * https://gpuopen.com/vulkan-device-memory/
22 */
23 enum DevMemUsage :uint8_t {
24 /**
25 * @brief DEVICE_LOCAL static data, fastest device memory.
26 * Definitely exists.
27 * STATIC is used for device local access, while others types
28 * in GpuMemUsage are used for host access.
29 */
30 STATIC,
31 /**
32 * @brief Dynamic data, optimal for transfer to device without staging.
33 * Only memory available on most integrated devices.
34 * Fast for device.
35 */
36 DYNAMIC_DST,
37 /**
38 * @brief Memory for staging transfer to device, slow for device.
39 * Defenetly exists, fallback to DYNAMIC_DST and STAGING_SRC.
40 */
41 STAGING_STATIC_DST,
42 /**
43 * @brief Staging memory, slow for device,
44 * optimal for tranfer from discrete device.
45 */
46 STAGING_SRC
47 };
48 constexpr static const uint8_t GPU_MEM_USAGE_COUNT = 4;
49
50 constexpr static const
51 jl::array<vkw::MemPropMask, GPU_MEM_USAGE_COUNT> GPU_MEM_USAGE_PROPS = {
52 vkw::MemProp::DEVICE_LOCAL,
53
54 vkw::MemProp::DEVICE_LOCAL |
55 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT,
56
57 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT,
58
59 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT |
60 vkw::MemProp::HOST_CACHED,
61 };
62 }
63
64 struct jen::DeviceBufferPart {
65 [[nodiscard]] vkw::DeviceSize size() const {
66 return region.size;
67 }
68 [[nodiscard]] vkw::DeviceSize offset() const {
69 return region.offset;
70 }
71 [[nodiscard]] vkw::DescrBuffer range() const {
72 return {buffer, offset(), size()};
73 }
74 [[nodiscard]] constexpr bool is_mapped() const {
75 return p_mapped;
76 }
77 [[nodiscard]] uint8_t* p_data() {
78 jassert(p_mapped != nullptr,
79 "trying to access non-mapped memory");
80 return reinterpret_cast<uint8_t*>(p_mapped) + offset();
81 }
82 [[nodiscard]] const uint8_t* p_data() const {
83 return const_cast<DeviceBufferPart*>(this)->p_data();
84 }
85 [[nodiscard]] constexpr bool is_flush_needed() const {
86 return not (mem_props & vkw::MemProp::HOST_COHERENT);
87 }
88
89 vkw::Buffer buffer;
90 vkw::Memory memory;
91 vkw::MemPropMask mem_props;
92
93 vkw::BufferUsageMask buffer_usage;
94 uint8_t mem_use_index;
95 protected:
96 friend DeviceBufferAtlas;
97 atlas::Atlas1D::Region region;
98 uint8_t *p_mapped;
99 };
100
101 struct jen::DeviceBufferAllocator
102 {
103 [[nodiscard]] bool
104 init(vkw::Device device, const vkw::DeviceMemProps &devmemprops);
105 void
106 destroy();
107
108 [[nodiscard]] vkw::Result
109 allocate(vkw::DeviceSize size,
110 vkw::DeviceSize alignment,
111 DevMemUsage mem_usage,
112 vkw::BufferUsageMask buffer_usage_mask,
113 bool map_memory,
114 DeviceBufferPart *p_dst);
115
116 void
117 deallocate(const DeviceBufferPart&);
118
119 DeviceBufferAllocatorData *p;
120 };
File include/jen/allocator/memory.h added (mode: 100644) (index 0000000..ea16046)
1 #pragma once
2 #include <vkw/memory.h>
3 #include <atlas/atlas.h>
4
5 namespace jen {
6 struct DeviceMemoryPart;
7 struct DeviceMemoryAllocator;
8 struct DeviceMemoryAllocatorData;
9 }
10
11 struct jen::DeviceMemoryPart {
12 vkw::Memory memory;
13 vkw::MemType type;
14 uint32_t allocator_index;
15 atlas::Atlas1D::Region part;
16 void *p_mapped;
17 };
18
19 struct jen::DeviceMemoryAllocator {
20 [[nodiscard]] bool
21 init(vkw::Device d, const vkw::DeviceMemProps &dmp);
22 void
23 destroy();
24
25 [[nodiscard]] vkw::Result
26 allocate(const vkw::MemReqs&, bool map, DeviceMemoryPart *p_dst);
27 void
28 deallocate(const DeviceMemoryPart &part);
29
30 [[nodiscard]] vkw::Result
31 map_memory(DeviceMemoryPart *p_part);
32
33 DeviceMemoryAllocatorData *p;
34 };
File include/jen/camera.h renamed from src/graphics/draw_data/camera.h (similarity 99%) (mode: 100644) (index 346f37e..15d6f06)
1 1 #pragma once #pragma once
2
3 2 #include <math/vector.h> #include <math/vector.h>
4 3 #include <math/frustum.h> #include <math/frustum.h>
5 4 #include <cstring> #include <cstring>
File include/jen/compute.h added (mode: 100644) (index 0000000..2832caf)
1 #pragma once
2 #include "configuration.h"
3 #if not JEN_MODULE_COMPUTE
4 #error compute module not enabled in this build
5 #endif
6 #include <jen/detail/gpu_image.h>
7 #include <jen/detail/cmd_container.h>
8
9 namespace jen::compute {
10 namespace BindingUseFlag { enum {
11 TRANSFER_SRC = vkw::BufferUsage::TRANSFER_SRC,
12 TRANSFER_DST = vkw::BufferUsage::TRANSFER_DST,
13 UNIFORM_TEXEL = vkw::BufferUsage::UNIFORM_TEXEL,
14 STORAGE_TEXEL = vkw::BufferUsage::STORAGE_TEXEL,
15 UNIFORM = vkw::BufferUsage::UNIFORM,
16 STORAGE = vkw::BufferUsage::STORAGE,
17 }; }
18 using BindingUseMask = uint32_t;
19
20 struct BindingCreateInfo {
21 vkw::DeviceSize size;
22 BindingUseMask use;
23 vkw::BindNo bindingNo;
24 };
25 using BindingCreateInfos = jl::rarray<const BindingCreateInfo>;
26
27 struct BindingBuffer {
28 DeviceBufferPart part;
29 vkw::BindNo binding;
30 bool use_staging;
31 DeviceBufferPart staging;
32 };
33
34 struct BindingBufferView : BindingBuffer {
35 vkw::BufferView view;
36 };
37
38
39 namespace ImageUseFlag { enum {
40 TRANSFER_SRC = vkw::ImUsage::TRANSFER_SRC,
41 TRANSFER_DST = vkw::ImUsage::TRANSFER_DST,
42 STORAGE = vkw::ImUsage::STORAGE,
43 }; }
44 using ImageUseMask = uint32_t;
45
46 struct ImageCreateInfo {
47 math::v3u32 extent;
48 uint32_t layer_count;
49 uint32_t mip_level_count;
50 VkFormat format;
51 vkw::ImType type;
52 vkw::Samples samples;
53 ImageUseMask usage;
54 };
55 using ImageCreateInfos = jl::rarray<const ImageCreateInfo>;
56
57 struct Image {
58 using GpuImage = GpuImage<GpuImageMode::VIEW>;
59 GpuImage image;
60 VkFormat format;
61 vkw::ImLayout layout;
62 DeviceBufferPart staging;
63 uint32_t mip_level_count;
64 uint32_t layer_count;
65 };
66
67 struct BindingImage {
68 BindingImage() = default;
69 constexpr BindingImage(Image *p_image, vkw::BindNo bindingNo)
70 : p_image(p_image), binding(bindingNo) {}
71 Image *p_image;
72 vkw::BindNo binding;
73 };
74
75 struct Bindings {
76 jl::rarray<const BindingBufferView> uniform_texel_buffer;
77 jl::rarray<const BindingBufferView> storage_texel_buffer;
78 jl::rarray<const BindingBuffer> uniform_buffer;
79 jl::rarray<const BindingBuffer> storage_buffer;
80 jl::rarray<const BindingImage> storage_image;
81 };
82
83 struct BindingsSet {
84 vkw::DescrSet set;
85 vkw::DescrPool pool;
86 };
87
88
89 struct Pipeline {
90 vkw::ShaderModule shader;
91 vkw::Pipeline pipeline;
92 vkw::PipelineLayout layout;
93 vkw::DescrLayout setLayout;
94 };
95
96 struct ImageTransfer {
97 uint32_t mip_level;
98 uint32_t layer_offset;
99 uint32_t layer_count;
100 math::v3u32 offset;
101 math::v3u32 extent;
102 void *p_data;
103 };
104 struct ImageTransfers {
105 Image *p_image;
106 jl::rarray<const ImageTransfer> transfers;
107 };
108 using ImagesTransfers = jl::rarray<const ImageTransfers>;
109
110 struct BufferTransfer {
111 BindingBuffer *p_buffer;
112 vkw::DeviceSize offset;
113 vkw::DeviceSize size;
114 void *p_data;
115 };
116 using BufferTransfers = jl::rarray<const BufferTransfer>;
117
118 constexpr static const uint32_t MAX_WORKGROUP_COUNT = 65535;
119 }
120 namespace jen
121 {
122 struct ModuleCompute {
123 [[nodiscard]] Result
124 create_pipeline(const compute::Bindings &bi, const char *p_shader_file_path,
125 vkw::ShaderSpecialization *p_specialization,
126 compute::Pipeline *p_dst);
127 [[nodiscard]] Result
128 create_bindings(compute::BindingCreateInfos infos,
129 compute::BindingBuffer *p_dst);
130 [[nodiscard]] Result
131 create_bindings(compute::BindingCreateInfos infos, VkFormat *p_formats,
132 compute::BindingBufferView *p_dst);
133 [[nodiscard]] Result
134 create_images(compute::ImageCreateInfos infos, compute::Image *p_dst);
135
136 [[nodiscard]] Result
137 create_bindingSet(const compute::Pipeline &pipeline,
138 const compute::Bindings &bindings,
139 compute::BindingsSet *p_dst);
140
141
142 void destroy_bindingSet(compute::BindingsSet *p_set);
143 void destroy_bindings(compute::BindingBuffer *p_bs, uint32_t count = 1);
144 void destroy_bindings(compute::BindingBufferView *p_bs, uint32_t count = 1);
145 void destroy_images(compute::Image *p_ims, uint32_t count = 1);
146 void destroy_pipeline(compute::Pipeline *p_pl);
147
148 Device *p_device;
149 };
150
151 struct ComputeInfo {
152 compute::Pipeline *p_pipeline;
153 compute::BindingsSet *p_bindingsSet;
154 compute::Bindings *p_bindings;
155 math::v3u32 group_count;
156 compute::BufferTransfers buffer_writes;
157 compute::BufferTransfers buffer_reads;
158 compute::ImagesTransfers images_writes;
159 compute::ImagesTransfers images_reads;
160 };
161
162 struct ComputeCmdUnitData;
163
164 struct ComputeCmdUnit {
165 [[nodiscard]] Result
166 init(ModuleCompute mc);
167 void
168 destroy();
169 [[nodiscard]] Result
170 compute(const ComputeInfo&);
171 [[nodiscard]] Result
172 compute_status();
173 [[nodiscard]] Result
174 read_result(compute::BufferTransfers buffer_reads,
175 compute::ImagesTransfers image_reads);
176 ComputeCmdUnitData *p;
177 };
178 }
File include/jen/configuration.h added (mode: 100644) (index 0000000..e7fa35f)
1 #pragma once
2 //This file is generated automatically from /src/configuration.h.in
3 #define JEN_NAME "JEN"
4 #define JEN_VERSION_MAJOR 0
5 #define JEN_VERSION_MINOR 1
6 #define JEN_VERSION_PATCH 0
7 #define JEN_MODULE_GRAPHICS 1
8 #define JEN_MODULE_COMPUTE 0
9 #define JEN_MODULE_RESOURCE_MANAGER 0
File include/jen/controls.h added (mode: 100644) (index 0000000..193a7f2)
1 #pragma once
2 #define GLFW_INCLUDE_VULKAN
3 #include <GLFW/glfw3.h>
4
5 namespace Key
6 {
7 enum State : uint8_t
8 {
9 OFF = GLFW_RELEASE,
10 ON = GLFW_PRESS
11 };
12 enum Board : uint16_t
13 {
14 kSPACE = GLFW_KEY_SPACE,
15 kMINUS = GLFW_KEY_MINUS,
16
17 k0 = GLFW_KEY_0,
18 k1 = GLFW_KEY_1,
19 k2 = GLFW_KEY_2,
20 k3 = GLFW_KEY_3,
21 k4 = GLFW_KEY_4,
22 k5 = GLFW_KEY_5,
23 k6 = GLFW_KEY_6,
24 k7 = GLFW_KEY_7,
25 k8 = GLFW_KEY_8,
26 k9 = GLFW_KEY_9,
27
28 kUp = GLFW_KEY_UP,
29 kDown = GLFW_KEY_DOWN,
30 kLeft = GLFW_KEY_LEFT,
31 kRight = GLFW_KEY_RIGHT,
32
33 kEQUAL = GLFW_KEY_EQUAL,
34
35 A = GLFW_KEY_A,
36 B = GLFW_KEY_B,
37 C = GLFW_KEY_C,
38 D = GLFW_KEY_D,
39 E = GLFW_KEY_E,
40 F = GLFW_KEY_F,
41 G = GLFW_KEY_G,
42 H = GLFW_KEY_H,
43 I = GLFW_KEY_I,
44 J = GLFW_KEY_J,
45 K = GLFW_KEY_K,
46 L = GLFW_KEY_L,
47 M = GLFW_KEY_M,
48 N = GLFW_KEY_N,
49 O = GLFW_KEY_O,
50 P = GLFW_KEY_P,
51 Q = GLFW_KEY_Q,
52 R = GLFW_KEY_R,
53 S = GLFW_KEY_S,
54 T = GLFW_KEY_T,
55 U = GLFW_KEY_U,
56 V = GLFW_KEY_V,
57 W = GLFW_KEY_W,
58 X = GLFW_KEY_X,
59 Y = GLFW_KEY_Y,
60 Z = GLFW_KEY_Z,
61
62 kESCAPE = GLFW_KEY_ESCAPE,
63
64 kBACKSPACE = GLFW_KEY_BACKSPACE,
65
66 kPAUSE = GLFW_KEY_PAUSE,
67
68 f1 = GLFW_KEY_F1,
69 f2 = GLFW_KEY_F2,
70 f3 = GLFW_KEY_F3,
71 f4 = GLFW_KEY_F4,
72 f5 = GLFW_KEY_F5,
73 f6 = GLFW_KEY_F6,
74 f7 = GLFW_KEY_F7,
75 f8 = GLFW_KEY_F8,
76 f9 = GLFW_KEY_F9,
77 f10 = GLFW_KEY_F10,
78 f11 = GLFW_KEY_F11,
79 f12 = GLFW_KEY_F12,
80
81 kCONTROL_L = GLFW_KEY_LEFT_CONTROL,
82 kCONTROL_R = GLFW_KEY_RIGHT_CONTROL
83 };
84
85 enum Mouse : uint8_t
86 {
87 m_1 = GLFW_MOUSE_BUTTON_1,
88 m_L = GLFW_MOUSE_BUTTON_LEFT,
89 m_R = GLFW_MOUSE_BUTTON_RIGHT,
90 m_M = GLFW_MOUSE_BUTTON_MIDDLE
91 };
92 };
File include/jen/detail/cmd_container.h renamed from src/device/cmd_container.h (similarity 97%) (mode: 100644) (index 283dccc..30d2c77)
1 1 #pragma once #pragma once
2
3 2 #include <vkw/timeline.h> #include <vkw/timeline.h>
4 3 #include <vkw/event.h> #include <vkw/event.h>
5 #include "device.h"
4 #include <vkw/fence.h>
5 #include <vkw/semaphore.h>
6 #include <vkw/command_buffer.h>
7 #include <jen/result.h>
6 8
7 9 namespace jen::vk namespace jen::vk
8 10 { {
File include/jen/detail/descriptors.h added (mode: 100644) (index 0000000..c01051e)
1 #pragma once
2 #include <math.h>
3 #include <jlib/darray.h>
4 #include <jlib/threads.h>
5 #include <vkw/descriptor_pool.h>
6 #include <jen/result.h>
7 #include <jen/allocator/buffer.h>
8
9 namespace jen {
10 struct Device;
11 struct DescriptorUniformBuffer;
12 struct DescriptorUniformDynamic;
13 struct DescriptorTexture;
14 struct DescriptorTextureAllocator;
15 struct DescriptorImageView;
16 }
17
18 struct jen::DescriptorUniformBuffer {
19 [[nodiscard]] Result
20 init(Device *p_dev, vkw::DeviceSize size);
21 void destroy(Device *p_dev);
22 DeviceBufferPart allocation;
23 bool isFlushNeeded;
24 };
25
26 struct jen::DescriptorUniformDynamic : DescriptorUniformBuffer
27 {
28 [[nodiscard]] Result
29 init(Device*, vkw::DeviceSize size, uint32_t count,
30 vkw::DescrBind, vkw::DescrPool);
31
32 void
33 destroy(Device*, vkw::DescrPool);
34
35 [[nodiscard]] uint32_t
36 offset(uint32_t index) const {
37 auto offset = aligned_size * index;
38 jassert(offset < allocation.size(),"buffer offset overflow");
39 return uint32_t(offset);
40 }
41 [[nodiscard]] uint8_t*
42 p_data(uint32_t index) {
43 return allocation.p_data() + offset(index);
44 }
45
46 [[nodiscard]] Result
47 flush(Device *p_dev, uint32_t index);
48
49 vkw::DescrSet set;
50 vkw::DescrLayout layout;
51 vkw::DeviceSize aligned_size;
52 vkw::DeviceSize single_size;
53 vkw::DeviceSize size;
54 };
55
56 struct jen::DescriptorTexture {
57 vkw::DescrPool pool;
58 vkw::DescrSet set;
59 };
60 struct jen::DescriptorTextureAllocator
61 {
62 using Set = DescriptorTexture;
63
64 [[nodiscard]] Result init(vkw::Device);
65 void destroy(vkw::Device);
66
67 [[nodiscard]] Result
68 create(vkw::Device, vkw::Sampler, vkw::ImView, Set *p_dst);
69 void destroy(vkw::Device, Set);
70
71
72 struct Pool
73 {
74 static constexpr uint_fast8_t MAX = 255;
75
76 [[nodiscard]] Result init(vkw::Device device);
77 void destroy(vkw::Device device) { pool.destroy(device); }
78
79 vkw::DescrPool pool;
80 uint_fast8_t consumed;
81 };
82
83 jl::darray<Pool> pools;
84 jth::Spinlock lock;
85 vkw::DescrLayout layout;
86 };
87
88 struct jen::DescriptorImageView
89 {
90 constexpr static const
91 auto DESCR_TYPE = vkw::DescrType::INPUT_ATTACHMENT;
92 constexpr static const
93 auto DESCR_TYPE_SAMPLER = vkw::DescrType::COMBINED_IMAGE_SAMPLER;
94
95 [[nodiscard]] Result
96 init(vkw::Device, vkw::DescrPool, vkw::ImView, vkw::Sampler = {});
97 void
98 update(vkw::Device, vkw::ImView, vkw::Sampler = {});
99 void
100 destroy(vkw::Device, vkw::DescrPool);
101
102 vkw::DescrSet set;
103 vkw::DescrLayout layout;
104 };
105
File include/jen/detail/gpu_image.h added (mode: 100644) (index 0000000..3282b71)
1 #pragma once
2 #include <jen/allocator/memory.h>
3 #include <jen/detail/descriptors.h>
4
5 namespace jen
6 {
7 struct Device;
8
9 struct GpuImageInfo {
10 vkw::Extent3D extent;
11 uint32_t layer_count;
12 uint32_t mip_level_count;
13 VkFormat format;
14 vkw::ImType type;
15 vkw::Samples samples;
16 vkw::ImUsageMask usage;
17 vkw::ImMask flags;
18 vkw::Tiling tiling;
19 };
20
21 struct GpuImageViewInfo {
22 vkw::ImViewType type;
23 vkw::ImAspectMask aspect;
24 };
25 struct GpuImageDescrInfo {
26 vkw::DescrPool pool;
27 };
28 }
29 namespace jen::detail
30 {
31 struct GpuImageExtraImage {
32 [[nodiscard]] Result
33 init_image(Device*, const GpuImageInfo&);
34 void destroy_image(vkw::Device d, DeviceMemoryAllocator a) {
35 a.deallocate(memory);
36 image.destroy(d);
37 }
38
39 DeviceMemoryPart memory;
40 vkw::Image image;
41 };
42 template<bool> struct GpuImageExtraView {
43 [[nodiscard]] constexpr Result
44 init_view(vkw::Device, const GpuImageInfo&,
45 vkw::Image, const GpuImageViewInfo&) {
46 return VK_SUCCESS;
47 }
48 void destroy_view(vkw::Device) {}
49 protected:
50 constexpr static const vkw::ImView view = {};
51 };
52 template<> struct GpuImageExtraView<true> {
53 [[nodiscard]] Result
54 init_view(vkw::Device d, const GpuImageInfo &ii,
55 vkw::Image im, const GpuImageViewInfo &vi) {
56 vkw::ImRange range {vi.aspect, ii.layer_count, ii.mip_level_count};
57 return view.init(d, im, vi.type, ii.format, range);
58 }
59 void destroy_view(vkw::Device d) {view.destroy(d);}
60 vkw::ImView view;
61 };
62
63 template<bool> struct GpuImageExtraSampler {
64 [[nodiscard]] constexpr Result
65 init_sampler(vkw::Device, const vkw::SamplerInfo&) {return VK_SUCCESS;}
66 void
67 destroy_sampler(vkw::Device) {}
68 protected:
69 constexpr static const vkw::Sampler sampler = {};
70 };
71 template<> struct GpuImageExtraSampler<true> {
72 [[nodiscard]] Result
73 init_sampler(vkw::Device d, const vkw::SamplerInfo &si) {
74 return sampler.init(d, si);
75 }
76 void
77 destroy_sampler(vkw::Device d) {sampler.destroy(d);}
78 vkw::Sampler sampler;
79 };
80
81 template<bool> struct GpuImageExtraDescriptor {
82 [[nodiscard]] constexpr Result
83 init_descr(vkw::Device,const GpuImageDescrInfo&,vkw::ImView,vkw::Sampler) {
84 return VK_SUCCESS;
85 }
86 void destroy_descr(vkw::Device, vkw::DescrPool) {}
87 };
88 template<> struct GpuImageExtraDescriptor<true> {
89 [[nodiscard]] Result
90 init_descr(vkw::Device d, const GpuImageDescrInfo &i, vkw::ImView v,
91 vkw::Sampler s) {
92 jassert(i.pool, "descriptor is used, but pool is invalid");
93 return descriptor.init(d, i.pool, v, s);
94 }
95 void
96 destroy_descr(vkw::Device d, vkw::DescrPool p) {
97 jassert(p, "descriptor is used, but pool is invalid");
98 descriptor.destroy(d, p);
99 }
100 DescriptorImageView descriptor;
101 };
102 enum GpuImageExtras {
103 NONE, VIEW = 1, SAMPLER = 0b10, DESCRIPTOR = 0b100
104 };
105 }
106 namespace jen
107 {
108 enum GpuImageMode {
109 NONE,
110 VIEW = detail::GpuImageExtras::VIEW,
111 SAMP = VIEW | detail::GpuImageExtras::SAMPLER,
112 DESCR = VIEW | detail::GpuImageExtras::DESCRIPTOR,
113 SAMP_DESCR = SAMP | DESCR
114 };
115 template<GpuImageMode M = GpuImageMode::NONE>
116 struct GpuImage :
117 detail::GpuImageExtraImage,
118 detail::GpuImageExtraView<((M & detail::GpuImageExtras::VIEW) > 0)>,
119 detail::GpuImageExtraSampler<((M & detail::GpuImageExtras::SAMPLER) > 0)>,
120 detail::GpuImageExtraDescriptor<((M&detail::GpuImageExtras::DESCRIPTOR) >0)>
121 {
122 [[nodiscard]] Result
123 init( Device *p_dd,
124 const GpuImageInfo *p_ii,
125 const GpuImageViewInfo *p_vi = {},
126 const vkw::SamplerInfo *p_si = {},
127 const GpuImageDescrInfo *p_di = {});
128 void
129 destroy(Device *p_d, vkw::DescrPool pool = {});
130 };
131
132 #define EXTERN_DEF(x) \
133 [[nodiscard]] extern template Result GpuImage<GpuImageMode:: x >:: \
134 init( Device*, \
135 const GpuImageInfo*, \
136 const GpuImageViewInfo*, \
137 const vkw::SamplerInfo*, \
138 const GpuImageDescrInfo*); \
139 extern template void GpuImage<GpuImageMode:: x >:: \
140 destroy(Device*, vkw::DescrPool);
141
142 EXTERN_DEF(NONE)
143 EXTERN_DEF(VIEW)
144 EXTERN_DEF(SAMP)
145 EXTERN_DEF(DESCR)
146 EXTERN_DEF(SAMP_DESCR)
147
148 #undef EXTERN_DEF
149 }
File include/jen/framework.h changed (mode: 100644) (index 33d9f96..0e02a3e)
1 1 #pragma once #pragma once
2
3 #include "../../src/instance/instance.h"
4 #include "../../src/device/device.h"
5 #include "../../src/graphics/graphics.h"
6 #include "../../src/compute/compute.h"
7 #include "../../src/settings.h"
8
9 #include <jlib/time.h>
10
2 #include "configuration.h"
3 #if JEN_MODULE_GRAPHICS
4 #include "graphics.h"
5 #endif
6 #if JEN_MODULE_COMPUTE
7 #include "compute.h"
8 #endif
9 #include "settings.h"
10 #include "window.h"
11 #if JEN_MODULE_RESOURCE_MANAGER
12 #include "resource_manager.h"
13 #endif
11 14
12 15 namespace jen {
13 struct Framework
14 {
16 struct Instance;
17 struct Device;
18
19 struct Framework {
15 20 [[nodiscard]] bool
16 21 init(ModulesMask modules_mask, const Settings &settings);
17 22 void destroy();
18 23
19 Instance instance;
20 vk::Device device;
21 ModuleGraphics *p_graphics;
22 ModuleCompute *p_compute;
23 };
24 [[nodiscard]] Window* get_window();
24 25
26 Instance *p_instance;
27 Device *p_device;
28 #if JEN_MODULE_GRAPHICS
29 ModuleGraphics graphics;
30 #endif
31 #if JEN_MODULE_COMPUTE
32 ModuleCompute compute;
33 #endif
34 #if JEN_MODULE_RESOURCE_MANAGER
35 ModuleResourceManager resource_manager;
36 #endif
37 };
25 38 }
26 39
File include/jen/graphics.h added (mode: 100644) (index 0000000..67bdb8f)
1 #pragma once
2 #include "configuration.h"
3 #if not JEN_MODULE_GRAPHICS
4 #error graphics module not enabled in this build
5 #endif
6 #include "camera.h"
7 #include "light.h"
8 #include "resources.h"
9 #include "result.h"
10 #include <jlib/time.h>
11 #include <jrf/image.h>
12
13 namespace jen {
14 struct DebugOverlay;
15 struct GraphicsData;
16 struct ModuleGraphics;
17 }
18 struct jen::ModuleGraphics
19 {
20 [[nodiscard]] Result apply_settings();
21 void apply_camera(const Camera&, const Frustum&);
22 void apply_light_shadow(const Light&);
23 void apply_lights(LightsDraw *p_lights);
24
25 [[nodiscard]] Result
26 create(const WriteData&, GpuData **pp_dst, bool free_source);
27
28 /// @param p_allocated externally allocated GpuData memory,
29 /// will be deallocated after destroy(GpuData*,bool)
30 [[nodiscard]] Result
31 create(const WriteData&, GpuData *p_allocated, bool free_source);
32
33 [[nodiscard]] Result
34 create(const jrf::Image *p_texture, GpuTexture **pp_dst, bool free_src);
35
36 [[nodiscard]] bool
37 create(const char* font_path, GlyphManager **pp_dst);
38
39 /// @param pp_text Valid handle or nullptr.
40 /// After calling with nullptr, it is important to fill the Text.data member.
41 /// Text.data can be changed at any time to change rendering options
42 /// for the next frame draw.
43 [[nodiscard]] Result
44 text_update(TextLayout layout, uint16_t pixel_size, Chars chars,
45 Colors_RGBA colors, GlyphManager *p_font, GpuText **pp_text);
46
47 void destroy(GlyphManager *p_font);
48 void destroy(GpuText *p_text);
49
50 void destroy(GpuTexture*, bool destroy_src_image);
51 void destroy(GpuData*, bool destroy_source);
52
53 [[nodiscard]] Result draw_frame(const jl::rarray<const Model> &models);
54
55 [[nodiscard]] Result update_settings_from_input();
56
57 using PF_User = void(*)(void*);
58
59 struct Loop {
60 void run(ModuleGraphics mg, void *p_update_arg, PF_User pf_update);
61
62 Result result;
63 jl::time last_update_time;
64 jl::time elapsed_after_update;
65 bool pause;
66 bool is_drawn;
67 bool draw;
68 bool break_loop;
69 bool wait_events;
70 jl::rarray<const Model> models;
71 };
72
73 GraphicsData *p;
74 };
File include/jen/jrl.h deleted (index 43cf5e6..0000000)
1 #include "../../src/graphics/jrl_defs.h"
2
3 namespace jen { struct ResourceManager; };
4
5 struct jen::ResourceManager
6 {
7 void init(ModuleGraphics *p_mg);
8 void destroy();
9
10 struct [[nodiscard]] Result {
11 vkw::Result vk;
12 jrf::Result jrf;
13 operator bool () { return vk == VK_SUCCESS; }
14 };
15
16 Result create(const jl::string_ro &path, Resource<jrf::IMAGE> *p_dst);
17 Result create(const jl::string_ro &path, Resource<jrf::VERTICES> *p_dst);
18 Result create(const jl::string_ro &path, Resource<jrf::INDICES> *p_dst);
19
20 void destroy(Resource<jrf::IMAGE> *p_res);
21 void destroy(Resource<jrf::VERTICES> *p_res);
22 void destroy(Resource<jrf::INDICES> *p_res);
23
24
25 Result create_mesh(const jl::string_ro &mesh_path,
26 jen::VertexData *p_dst_ver,
27 jen::IndexData *p_dst_ind);
28
29 void destroy_mesh(const jl::string_ro &mesh_path);
30
31 Result create_model(const jl::string_ro &model_path,
32 jen::VertexData *p_dst_ver,
33 jen::IndexData *p_dst_ind,
34 jen::TextureData *p_dst_img);
35
36 void destroy_model(const jl::string_ro &model_path);
37
38 using SceneData = jl::rarray<jen::Model>;
39
40 Result create_scene(const jl::string_ro &path,
41 jen::ShiftPO2 shift_scale,
42 SceneData *p_dst);
43
44 void destroy_scene(const jl::string_ro &scene_path);
45
46 private:
47
48 template<jrf::ResourceType RT>
49 using Storage = jl::darray_sorted<detail::ResHandle<RT>>;
50
51 Storage<jrf::IMAGE> images;
52 Storage<jrf::VERTICES> vertices;
53 Storage<jrf::INDICES> indices;
54 Storage<jrf::MESH> meshes;
55 Storage<jrf::MODEL> models;
56 Storage<jrf::SCENE> scenes;
57 ModuleGraphics *p_moduleGraphics;
58
59 void get_storage(Storage<jrf::IMAGE> **p_p) { *p_p = &images; }
60 void get_storage(Storage<jrf::VERTICES> **p_p) { *p_p = &vertices;}
61 void get_storage(Storage<jrf::INDICES> **p_p) { *p_p = &indices; }
62 void get_storage(Storage<jrf::MODEL> **p_p) { *p_p = &models; }
63
64 template<jrf::ResourceType RT>
65 [[nodiscard]] vkw::Result
66 insert( jl::string *p_path_move,
67 typename jrf::Resource<RT>::T *p_jrf_resource,
68 detail::ResHandle<RT> *p_dst);
69 template<jrf::ResourceType RT>
70 Result create_res(jl::string *p_path, detail::ResHandle<RT> *p_dst);
71 template<jrf::ResourceType RT>
72 Result create(const jl::string_ro &path, Resource<RT> *p_dst);
73 template<jrf::ResourceType RT>
74 Result create(const jl::string_ro &path, detail::ResHandle<RT> *p_dst);
75 template<jrf::ResourceType RT>
76 Result create(jl::string *p_path_move, detail::ResHandle<RT> **pp_dst);
77 template<jrf::ResourceType RT>
78 void destroy(const jl::string_ro &path);
79
80 template<jrf::ResourceType RT, typename RM_res, typename Render_Res>
81 Result
82 create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
83 jen::detail::ResRef<RM_res> *p_dst,
84 Render_Res *p_dst2);
85
86 template<jrf::ResourceType RT>
87 [[nodiscard]] bool
88 find_and_apply_ref_count(const jl::string_ro &path,
89 detail::ResHandle<RT> *p_dst);
90
91 Result
92 create_mesh_res(jrf::Mesh *p_jrf_src,
93 jen::detail::RM_Mesh *p_rm_mesh,
94 jen::VertexData *p_dst_ver,
95 jen::IndexData *p_dst_ind);
96 void
97 mesh_resources(const jen::detail::RM_Mesh &mesh,
98 jen::VertexData *p_dst_ver,
99 jen::IndexData *p_dst_ind);
100 [[nodiscard]] bool
101 find_and_apply_ref_count(const jl::string_ro &mesh_path,
102 jen::VertexData *p_dst_ver,
103 jen::IndexData *p_dst_ind);
104
105 [[nodiscard]] bool
106 find_and_apply_ref_count(const jl::string_ro &model_path,
107 jen::VertexData *p_dst_ver,
108 jen::IndexData *p_dst_ind,
109 jen::TextureData *p_dst_img);
110
111 Result create_mesh_res(jl::string *p_path,
112 jen::VertexData *p_dst_ver,
113 jen::IndexData *p_dst_ind);
114
115 Result create_model_res(jl::string *p_path,
116 jen::VertexData *p_dst_ver,
117 jen::IndexData *p_dst_ind,
118 jen::TextureData *p_dst_img);
119 };
File include/jen/light.h added (mode: 100644) (index 0000000..3175047)
1 #pragma once
2 #include <math/vector.h>
3 #include <jlib/rarray.h>
4
5 namespace jen {
6 constexpr static const uint32_t MAX_LIGHTS_COUNT = 512;
7 constexpr static const uint32_t MAX_LIGHTS_COUNT_IN_CLUSTER = 128;
8
9 struct Light {
10 math::v3f pos;
11 float radius;
12 math::v4f color;
13 float znear;
14 float zfar;
15 float __junk[2];
16
17 [[nodiscard]] bool operator == (const Light &l) {
18 return memcmp(this, &l, offsetof(Light,__junk)) == 0;
19 }
20 [[nodiscard]] bool operator != (const Light &l) {
21 return not (*this == l);
22 }
23 };
24 static_assert(sizeof(Light) % 16 == 0);
25
26 using Lights = jl::rarray<const Light>;
27 struct LightsDraw {
28 Lights lights;
29 bool is_updated;
30 };
31 }
File include/jen/resource_manager.h added (mode: 100644) (index 0000000..20b87a9)
1 #pragma once
2 #include "configuration.h"
3 #if not JEN_MODULE_RESOURCE_MANAGER or not JEN_MODULE_GRAPHICS
4 #error resource manager or graphics module not enabled in this build
5 #endif
6 #include "graphics.h"
7 #include "jlib/string.h"
8 #include <jrf/scene.h>
9
10 namespace jen::detail
11 {
12 template<typename T>
13 struct ResRef {
14 union {
15 T res;
16 jl::string path;
17 } u;
18 bool is_data;
19 };
20
21 using RM_Image = TextureData;
22 using RM_Vertices = VertexData;
23 using RM_Indices = IndexData;
24 struct RM_Mesh {
25 ResRef<RM_Vertices> ver;
26 ResRef<RM_Indices> ind;
27 };
28 struct RM_Model {
29 ResRef<RM_Mesh> mesh;
30 ResRef<RM_Image> image;
31 };
32 using RM_Scene = jrf::Scene;
33
34
35 template<jrf::ResourceType>
36 struct RM_Resource { using T = void; };
37 template<>
38 struct RM_Resource<jrf::IMAGE> { using T = RM_Image; };
39 template<>
40 struct RM_Resource<jrf::VERTICES> { using T = RM_Vertices; };
41 template<>
42 struct RM_Resource<jrf::INDICES> { using T = RM_Indices; };
43 template<>
44 struct RM_Resource<jrf::MESH> { using T = RM_Mesh; };
45 template<>
46 struct RM_Resource<jrf::MODEL> { using T = RM_Model; };
47 template<>
48 struct RM_Resource<jrf::SCENE> { using T = RM_Scene; };
49 }
50
51 namespace jen {
52 template<jrf::ResourceType RT>
53 struct Resource {
54 constexpr static const jrf::ResourceType TYPE = RT;
55 using T = typename detail::RM_Resource<RT>::T;
56 operator const T& () { return res; }
57
58 jl::string path;
59 T res;
60 };
61 struct ModuleResourceManager;
62 struct ResourceManagerData;
63 }
64
65 struct jen::ModuleResourceManager
66 {
67 struct [[nodiscard]] Result {
68 jen::Result jen;
69 jrf::Result jrf;
70 operator bool () { return jen == VK_SUCCESS; }
71 };
72
73 Result create(const jl::string_ro &path, Resource<jrf::IMAGE> *p_dst);
74 Result create(const jl::string_ro &path, Resource<jrf::VERTICES> *p_dst);
75 Result create(const jl::string_ro &path, Resource<jrf::INDICES> *p_dst);
76
77 void destroy(Resource<jrf::IMAGE> *p_res);
78 void destroy(Resource<jrf::VERTICES> *p_res);
79 void destroy(Resource<jrf::INDICES> *p_res);
80
81
82 Result create_mesh(const jl::string_ro &mesh_path,
83 jen::VertexData *p_dst_ver,
84 jen::IndexData *p_dst_ind);
85
86 void destroy_mesh(const jl::string_ro &mesh_path);
87
88 Result create_model(const jl::string_ro &model_path,
89 jen::VertexData *p_dst_ver,
90 jen::IndexData *p_dst_ind,
91 jen::TextureData *p_dst_img);
92
93 void destroy_model(const jl::string_ro &model_path);
94
95 using SceneData = jl::rarray<jen::Model>;
96
97 Result create_scene(const jl::string_ro &path,
98 jen::ShiftPO2 shift_scale,
99 SceneData *p_dst);
100
101 void destroy_scene(const jl::string_ro &scene_path);
102
103 ResourceManagerData *p;
104 };
File include/jen/resources.h added (mode: 100644) (index 0000000..d68448a)
1 #pragma once
2 #include <math/vector.h>
3 #include <math/matrix.h>
4 #include <jlib/array.h>
5 #include <jlib/rarray.h>
6
7 namespace jen {
8 enum class [[nodiscard]] ResourceState : uint8_t {
9 LOADING = 0b00,
10 DONE = 0b01
11 };
12 struct GpuData;
13 constexpr static const uint64_t GPU_DATA_ALLOCATION_SIZE = 88;
14 struct GpuTexture;
15 constexpr static const uint64_t GPU_TEXTURE_ALLOCATION_SIZE = 120;
16
17 struct WriteData {
18 void *p;
19 uint64_t size;
20 };
21
22 ResourceState resource_state(const GpuData * const);
23 ResourceState resource_state(const GpuTexture* const);
24
25 [[nodiscard]] inline bool is_resource_ready(const GpuData*const p) {
26 return resource_state(p) == ResourceState::DONE;
27 }
28 [[nodiscard]] inline bool is_resource_ready(const GpuTexture*const p) {
29 return resource_state(p) == ResourceState::DONE;
30 }
31
32
33 struct GlyphManager;
34
35 using Chars = jl::rarray<const uint32_t>;
36 using Colors_RGBA = jl::rarray<const uint32_t>;
37
38 struct TextOffsetMode {
39 enum class X : uint8_t { LEFT, CENTER, RIGHT } x;
40 enum class Y : uint8_t { TOP, CENTER, BOTTOM } y;
41 };
42
43 enum class TextLayout : uint8_t { LEFT, CENTER, RIGHT };
44
45 struct TextPosition {
46 math::v2f offset;
47 TextOffsetMode text_offset_mode;
48 TextOffsetMode screen_offset_mode;
49 };
50
51 struct GpuText;
52
53
54 enum VAttr : uint8_t { POSITION, TEX_COORD, NORMAL, TEX_IND, TEX_SCALE };
55 constexpr static const uint8_t VATTR_TYPE_COUNT = 5;
56 using VAttrsOffsets = jl::array<uint64_t, VATTR_TYPE_COUNT>;
57
58 struct VertexData {
59 GpuData *p_data;
60 VAttrsOffsets offsets;
61 uint32_t count;
62 };
63
64 enum class IndexType { U16 = 0, U32 = 1 };
65
66 struct IndexData {
67 GpuData *p_data;
68 uint64_t offset;
69 uint32_t count;
70 IndexType type;
71 };
72 struct TextureData {
73 GpuTexture *p_data;
74 uint32_t layer_index;
75 };
76 struct ModelWorld {
77 math::m4f transform;
78 math::v3f position;
79 math::v3i32 position_shift;
80 };
81
82 struct Model {
83 VertexData ver;
84 IndexData ind;
85 TextureData tex;
86 ModelWorld world;
87
88 [[nodiscard]] bool is_ready_to_draw() const {
89 if (not is_resource_ready(tex.p_data))
90 return false;
91 if (not is_resource_ready(ver.p_data))
92 return false;
93 if ( ind.p_data != nullptr
94 and not is_resource_ready(ind.p_data)
95 and ind.count != 0)
96 return false;
97
98 return true;
99 }
100 };
101 }
File include/jen/result.h added (mode: 100644) (index 0000000..a63a5b9)
1 #pragma once
2 #include <vkw/result.h>
3
4 namespace jen {
5 using Result = vkw::Result;
6 }
File include/jen/screen.h changed (mode: 100644) (index 567b98a..b217a9a)
1 1 #pragma once
2
3 #include "framework.h"
2 #include "camera.h"
3 #include "window.h"
4 4
5 5 namespace jen::screen {
6 6 struct Noclip;
File include/jen/settings.h added (mode: 100644) (index 0000000..c24770b)
1 #pragma once
2 #include "controls.h"
3 #include <memory>
4
5 namespace jen {
6 struct ApplicationSettings;
7 struct ThreadPoolSettings;
8 struct WindowSettings;
9 struct GraphicsSettings;
10
11 namespace ModulesFlag { enum T : uint32_t {
12 COMPUTE = 1,
13 GRAPHICS = 2,
14 RESOURCE_MANAGER = 4
15 }; }
16 using ModulesMask = uint32_t;
17
18 struct Version {
19 Version() = default;
20 constexpr Version(uint16_t major, uint16_t minor, uint16_t patch)
21 : major(major), minor(minor), patch(patch) {}
22 uint16_t major;
23 uint16_t minor;
24 uint16_t patch;
25 };
26
27 struct Settings;
28 }
29
30 struct jen::ApplicationSettings {
31 const char *p_name_str;
32 Version version;
33 };
34
35 struct jen::ThreadPoolSettings {
36 struct Indices {
37 uint32_t drawFrame;
38 };
39 uint32_t queues_count;
40 uint32_t threads_count;
41 Indices queue_indices;
42 };
43
44 struct jen::WindowSettings {
45 const char *p_title_str;
46 };
47
48 struct jen::GraphicsSettings
49 {
50 struct DebugOverlay {
51 bool is_enabled;
52 bool is_visible;
53 Key::Board toggle_key;
54 const char *font_path;
55 };
56
57 enum class Shading : uint32_t {
58 DEFAULT,
59 NO_LIGHTING,
60 DEBUG_TEXTURE_COORDINATES,
61 DEBUG_CLUSTERS_DEPTH,
62 DEBUG_CLUSTERS_NUM_LIGHTS,
63 COUNT
64 };
65
66 enum class Filter : uint32_t { _1, _16, _25, _32, _64, _100, _128 };
67 struct Shadow {
68 float bias = 0.05f;
69 Filter pcss_search = Filter::_16;
70 Filter pcf = Filter::_32;
71 uint32_t extent = 512;
72 };
73
74 enum class DrawMode : uint8_t {
75 DEFAULT, WIREFRAME, POINTS
76 };
77 enum class CullMode : uint8_t {
78 NONE,
79 FRONT = 1,
80 BACK = 2,
81 FRONT_AND_BACK = FRONT | BACK
82 };
83
84 Shading shading = Shading::DEFAULT;
85 Shadow shadows;
86 DrawMode draw_mode;
87 CullMode cull_mode;
88 uint8_t multisampling;
89 bool is_vSync_enabled;
90 bool wait_for_gpu_frame_draw;
91 bool wait_for_monitor;
92 bool is_debug_normals_visible;
93 bool is_debug_depth_cube_visible;
94 DebugOverlay debug_overlay;
95
96 [[nodiscard]] bool operator ==(const GraphicsSettings& settings) const {
97 return memcmp(this, &settings, sizeof(*this)) == 0;
98 }
99 [[nodiscard]] bool operator !=(const GraphicsSettings& settings) const {
100 return not operator==(settings);
101 }
102 };
103
104 struct jen::Settings {
105 constexpr void
106 set_default(const ApplicationSettings &app_settings) {
107 application = app_settings;
108
109 thread_pool.queues_count = 1;
110 thread_pool.threads_count = 0;
111 thread_pool.queue_indices.drawFrame = 0;
112
113 window.p_title_str = "";
114
115 graphics.draw_mode = GraphicsSettings::DrawMode::DEFAULT;
116 graphics.cull_mode = GraphicsSettings::CullMode::BACK;
117 graphics.shading = GraphicsSettings::Shading::DEFAULT;
118 graphics.shadows.bias = 0.05f;
119 graphics.shadows.pcss_search = GraphicsSettings::Filter::_16;
120 graphics.shadows.pcf = GraphicsSettings::Filter::_32;
121 graphics.shadows.extent = 512;
122 graphics.multisampling = 1;
123 graphics.is_vSync_enabled = true;
124 graphics.wait_for_gpu_frame_draw = true;
125 graphics.wait_for_monitor = true;
126 graphics.is_debug_normals_visible = false;
127 graphics.is_debug_depth_cube_visible = false;
128
129 graphics.debug_overlay.is_enabled = true;
130 graphics.debug_overlay.is_visible = false;
131 graphics.debug_overlay.toggle_key = Key::Board::f1;
132 graphics.debug_overlay.font_path = "fonts//IBMPlexMono.ttf";
133 }
134 [[nodiscard]] constexpr static Settings
135 get_default(const ApplicationSettings &app_settings) {
136 jen::Settings s = {};
137 s.set_default(app_settings);
138 return s;
139 }
140
141 ApplicationSettings application;
142 ThreadPoolSettings thread_pool;
143 WindowSettings window;
144 GraphicsSettings graphics;
145 };
File include/jen/window.h added (mode: 100644) (index 0000000..58723eb)
1 #pragma once
2 #include "controls.h"
3 #include <math/vector.h>
4 #include <vkw/surface.h>
5
6 struct Window
7 {
8 enum class CursorMode {
9 NORMAL = GLFW_CURSOR_NORMAL,
10 HIDDEN = GLFW_CURSOR_HIDDEN,
11 DISABLED = GLFW_CURSOR_DISABLED
12 };
13 struct InputMode {
14 CursorMode cursor;
15 };
16
17 using Cursor = math::v2d;
18 using Extent = math::vec2<int>;
19 constexpr static const Extent ExtentAny = { GLFW_DONT_CARE, GLFW_DONT_CARE };
20
21 [[nodiscard]] static bool init_glfw()
22 {
23 glfwSetErrorCallback([](int, const char* error) {
24 fprintf(stderr, "GLFW_ERROR: %s\n", error);
25 });
26 return glfwInit();
27 }
28
29 [[nodiscard]] bool init(Extent extent_, const char* title, bool is_visible_)
30 {
31 is_visible = is_visible_;
32 extent = extent_;
33
34 p_monitor = glfwGetPrimaryMonitor();
35 p_video_mode = glfwGetVideoMode(p_monitor);
36
37 glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
38 glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
39 glfwWindowHint(GLFW_VISIBLE, is_visible);
40 p_window = glfwCreateWindow(extent.x, extent.y, title, {}, {});
41
42
43 glfwSetWindowUserPointer(p_window, this);
44 glfwSetWindowSizeCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
45 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
46 p->extent.x = w;
47 p->extent.y = h;
48 });
49 glfwSetFramebufferSizeCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
50 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
51 p->framebuffer_extent.x = w;
52 p->framebuffer_extent.y = h;
53 });
54 glfwSetWindowPosCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
55 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
56 p->position.x = w;
57 p->position.y = h;
58 });
59 glfwSetWindowIconifyCallback(p_window, [](GLFWwindow* p_w, int iconified) {
60 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
61 p->is_iconified = iconified;
62 });
63 glfwSetWindowFocusCallback(p_window, [](GLFWwindow* p_w, int focus) {
64 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
65 p->is_focused = focus;
66 });
67 glfwSetWindowRefreshCallback(p_window, [](GLFWwindow *p_w) {
68 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
69 p->is_damaged = true;
70 });
71 glfwSetCursorPosCallback(p_window, [](GLFWwindow *p_w, double x, double y) {
72 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
73 p->cursor = {x, y};
74 });
75
76 glfwGetCursorPos(p_window, &cursor.x, &cursor.y);
77 glfwGetWindowSize(p_window, &extent.x, &extent.y);
78 glfwGetFramebufferSize(p_window,
79 &framebuffer_extent.x, &framebuffer_extent.y);
80 glfwGetWindowPos(p_window, &position.x, &position.y);
81
82 #if GLFW_VERSION_MINOR > 2
83 if (glfwRawMouseMotionSupported())
84 glfwSetInputMode(p_window, GLFW_RAW_MOUSE_MOTION, GLFW_TRUE);
85 #endif
86
87 input_mode.cursor = get_cursor_mode();
88 return p_window != nullptr;
89 }
90
91 void set_visibility(bool is_visible_) {
92 is_visible = is_visible_;
93 if (is_visible)
94 glfwShowWindow(p_window);
95 else
96 glfwHideWindow(p_window);
97 }
98
99 void set_extent_limits(Extent min = ExtentAny, Extent max = ExtentAny) const {
100 glfwSetWindowSizeLimits(p_window, min.x, min.y, max.x, max.y);
101 }
102
103 [[nodiscard]] vkw::Result
104 create_surface(vkw::Instance ins, vkw::Surface *p_dst) const {
105 return glfwCreateWindowSurface(ins, p_window, nullptr, &p_dst->p_vk);
106 }
107
108 void destroy() {
109 glfwDestroyWindow(p_window);
110 }
111
112 [[nodiscard]] Key::State state(Key::Board key) const {
113 return Key::State(glfwGetKey(p_window, key));
114 }
115 [[nodiscard]] Key::State state(Key::Mouse key) const {
116 return Key::State(glfwGetMouseButton(p_window, key));
117 }
118 [[nodiscard]] bool is_on(Key::Board key) const {
119 return state(key) == Key::State::ON;
120 }
121
122 [[nodiscard]] bool is_on(Key::Mouse key) const {
123 return state(key) == Key::State::ON;
124 }
125
126 [[nodiscard]] bool is_off(Key::Board key) const {
127 return state(key) == Key::State::OFF;
128 }
129
130 [[nodiscard]] bool is_off(Key::Mouse key) const {
131 return state(key) == Key::State::OFF;
132 }
133
134 static void poll() {
135 glfwPollEvents();
136 }
137
138 static void wait() {
139 glfwWaitEvents();
140 }
141
142 [[nodiscard]] bool is_window_close_fired() const {
143 return glfwWindowShouldClose(p_window) == GLFW_TRUE;
144 }
145
146 void toggle_fullscreen() {
147 if (is_fullscreen)
148 set_windowed();
149 else
150 set_fullscreen();
151 }
152
153 void set_fullscreen() {
154 if (not is_fullscreen) {
155 old_window_data.extent = extent;
156 old_window_data.framebuffer_extent = framebuffer_extent;
157 old_window_data.position = position;
158 glfwSetWindowMonitor(p_window, p_monitor, 0, 0,
159 p_video_mode->width, p_video_mode->height,
160 GLFW_DONT_CARE);
161 is_fullscreen = true;
162 }
163 }
164
165 void set_windowed() {
166 if (is_fullscreen) {
167 glfwSetWindowMonitor(p_window, nullptr, old_window_data.position.x,
168 old_window_data.position.y,
169 old_window_data.extent.x, old_window_data.extent.y,
170 GLFW_DONT_CARE);
171 is_fullscreen = false;
172 }
173 }
174
175 [[nodiscard]] int refresh_rate() {
176 auto monitor = glfwGetPrimaryMonitor();
177 const GLFWvidmode* mode = glfwGetVideoMode(monitor);
178 return mode->refreshRate;
179 }
180
181 void window_close_fire() {
182 glfwSetWindowShouldClose(p_window, GLFW_TRUE);
183 }
184
185 void set_cursor_mode(CursorMode mode) {
186 input_mode.cursor = mode;
187 glfwSetInputMode(p_window, GLFW_CURSOR, int(input_mode.cursor));
188 }
189
190 [[nodiscard]] CursorMode get_cursor_mode() const {
191 return CursorMode(glfwGetInputMode(p_window, GLFW_CURSOR));
192 }
193
194 operator GLFWwindow* () { return p_window; }
195
196 const GLFWvidmode *p_video_mode;
197 GLFWmonitor *p_monitor;
198 GLFWwindow *p_window;
199
200 struct OldData {
201 Extent extent;
202 Extent framebuffer_extent;
203 Extent position;
204 };
205
206 InputMode input_mode;
207 Extent extent;
208 Extent framebuffer_extent;
209 Extent position;
210 OldData old_window_data;
211
212
213 bool is_iconified = false;
214 bool is_visible = true;
215 bool is_focused = false;
216 bool is_fullscreen = false;
217 bool is_damaged = false;
218
219 uint8_t ___reserved[7];
220 Cursor cursor;
221 };
File libs/vkw/include/vkw/instance.h changed (mode: 100644) (index c00aebc..20cf67e)
3 3 #include "result.h"
4 4 #include "typedefs.h"
5 5
6 namespace vkw { struct Instance; }
6 namespace vkw {
7 struct Version {
8 Version() = default;
9 constexpr Version(uint32_t major, uint32_t minor, uint32_t patch)
10 : num(VK_MAKE_VERSION(major,minor,patch)) {}
11 uint32_t num;
12 };
13 struct ProductInfo {
14 const char *p_name_str;
15 Version version;
16 };
17 struct VulkanApiVersion {
18 uint32_t num;
19 };
20 constexpr static const VulkanApiVersion VULKAN_API_1_0 = {VK_API_VERSION_1_0};
21 constexpr static const VulkanApiVersion VULKAN_API_1_1 = {VK_API_VERSION_1_1};
22 constexpr static const VulkanApiVersion VULKAN_API_1_2 = {VK_API_VERSION_1_2};
23
24 struct Instance;
25 }
7 26
8 27 struct vkw::Instance : HandleWrapper<VkInstance> {
9 [[nodiscard]] Result init(Strings layers, Strings extensions) {
28 [[nodiscard]] Result
29 init(Strings layers, Strings extensions, VulkanApiVersion api_version,
30 ProductInfo application, ProductInfo engine) {
10 31 VkApplicationInfo appInfo; {
11 32 appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
12 33 appInfo.pNext = nullptr;
13 appInfo.pApplicationName = "Game"; //NOTE application info
14 appInfo.applicationVersion = VK_MAKE_VERSION( 1, 0, 0 );
15 appInfo.pEngineName = "";
16 appInfo.engineVersion = VK_MAKE_VERSION( 1, 0, 0 );
17 appInfo.apiVersion = VK_API_VERSION_1_1;
34 appInfo.pApplicationName = application.p_name_str;
35 appInfo.applicationVersion = application.version.num;
36 appInfo.pEngineName = engine.p_name_str;
37 appInfo.engineVersion = engine.version.num;
38 appInfo.apiVersion = api_version.num;
18 39 }
19 40 VkInstanceCreateInfo info; {
20 41 info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
File src/CMakeLists.txt changed (mode: 100644) (index f460d95..1ceb27c)
1 1 cmake_minimum_required(VERSION 3.5)
2 2
3 add_library(JEN STATIC
4
3 set(MINIMAL_SOURCES
5 4 instance/debug.cpp
6 5 instance/instance.cpp
7
8 device/allocator/memory.cpp
9 device/allocator/memory_allocator.cpp
10 device/allocator/buffer.cpp
11 device/allocator/buffer_allocator.cpp
6 allocator/memory.cpp
7 allocator/buffer.cpp
12 8 device/device.cpp
9 gpu_image.cpp
10 descriptors.cpp
11 framework.cpp
12 )
13 13
14 graphics/gpu_transfer/data.cpp
15 graphics/gpu_transfer/queues.cpp
16 graphics/gpu_transfer/gpu_transfer.cpp
17 graphics/draw_stages/clusters.cpp
18 graphics/draw_stages/gpu_image.cpp
19 graphics/draw_stages/attachment.cpp
20 graphics/draw_stages/swap_chain.cpp
21 graphics/draw_stages/draw_stages.cpp
22 graphics/draw_stages/pass_main.cpp
23 graphics/draw_stages/pass_depthcube.cpp
24 graphics/draw_stages/descriptors.cpp
25 graphics/draw_stages/offscreen/offscreen.cpp
26 graphics/draw_stages/composition/composition.cpp
27 graphics/draw_stages/fonts/fonts.cpp
28 graphics/draw_data/text_data/atlas_buffer.cpp
29 graphics/draw_data/text_data/glyphs.cpp
30 graphics/draw_data/text_data/text_data.cpp
31 graphics/draw_data/draw_data.cpp
32 graphics/cmd_data.cpp
33 graphics/jrl.cpp
34 graphics/graphics_interface.cpp
35 graphics/graphics.cpp
36 graphics/debug_overlay.cpp
14 if(JEN_MODULE_GRAPHICS)
15 set(GRAPHICS_SOURCES
16 graphics/gpu_transfer/data.cpp
17 graphics/gpu_transfer/queues.cpp
18 graphics/gpu_transfer/gpu_transfer.cpp
19 graphics/draw_stages/clusters.cpp
20 graphics/draw_stages/attachment.cpp
21 graphics/draw_stages/swap_chain.cpp
22 graphics/draw_stages/draw_stages.cpp
23 graphics/draw_stages/pass_main.cpp
24 graphics/draw_stages/pass_depthcube.cpp
25 graphics/draw_stages/offscreen/offscreen.cpp
26 graphics/draw_stages/composition/composition.cpp
27 graphics/draw_stages/fonts/fonts.cpp
28 graphics/draw_data/text_data/atlas_buffer.cpp
29 graphics/draw_data/text_data/glyphs.cpp
30 graphics/draw_data/text_data/text_data.cpp
31 graphics/draw_data/draw_data.cpp
32 graphics/cmd_data.cpp
33 graphics/graphics_interface.cpp
34 graphics/graphics.cpp
35 graphics/debug_overlay.cpp
36 )
37 else()
38 set(GRAPHICS_SOURCES)
39 endif()
37 40
38 compute/compute.cpp
41 if(JEN_MODULE_COMPUTE)
42 set(COMPUTE_SOURCES
43 compute/compute.cpp
44 compute/cmd_unit.cpp
45 )
46 else()
47 set(COMPUTE_SOURCES)
48 endif()
39 49
40 framework.cpp
41 )
50 if(JEN_MODULE_RESOURCE_MANAGER)
51 set(RESOURCE_MANAGER_SOURCES
52 resource_manager/resource_manager.cpp
53 )
54 else()
55 set(RESOURCE_MANAGER_SOURCES)
56 endif()
42 57
58 add_library(JEN STATIC
59 ${MINIMAL_SOURCES}
60 ${COMPUTE_SOURCES}
61 ${GRAPHICS_SOURCES}
62 ${RESOURCE_MANAGER_SOURCES}
63 )
43 64 target_link_libraries(JEN
44 65 SIMDCPP
45 66 ATLAS
File src/allocator/buffer.cpp added (mode: 100644) (index 0000000..ad52ebd)
1 #include <jen/allocator/buffer.h>
2 #include <jlib/threads.h>
3
4 namespace jen {
5 struct DeviceBuffer;
6 struct DeviceBufferPart;
7 };
8
9 struct jen::DeviceBuffer
10 {
11 [[nodiscard]] vkw::Result
12 init(vkw::Device device,
13 const vkw::DeviceMemProps &dmp,
14 vkw::DeviceSize size,
15 vkw::MemPropMask mem_props,
16 vkw::BufferUsageMask buf_usage,
17 bool map);
18
19 void destroy(vkw::Device);
20
21 vkw::Buffer buffer;
22 vkw::Memory memory;
23 vkw::MemPropMask mem_props;
24 uint8_t *p_mapped;
25 };
26
27 struct jen::DeviceBufferAtlas
28 {
29 constexpr static const vkw::DeviceSize MEGABYTE = 1024 * 1024;
30
31 [[nodiscard]] constexpr static vkw::DeviceSize
32 preferred_allocation_size() {
33 return MEGABYTE * 16;
34 }
35
36 [[nodiscard]] vkw::Result
37 init(vkw::Device device,
38 const vkw::DeviceMemProps &dms,
39 vkw::DeviceSize size,
40 vkw::MemPropMask mem_props,
41 vkw::BufferUsageMask buf_usage,
42 bool map,
43 DeviceBufferPart *p_dst);
44
45 void
46 destroy(vkw::Device device);
47
48 [[nodiscard]] bool is_empty() const { return atlas.is_full(); }
49
50 [[nodiscard]] vkw::Result
51 allocate(vkw::Device d, vkw::DeviceSize size, vkw::DeviceSize alignment,
52 bool map, DeviceBufferPart *p_dst);
53
54 void deallocate(const DeviceBufferPart &gba);
55
56 DeviceBuffer buffer;
57 atlas::Atlas1D atlas;
58 };
59
60
61 struct jen::DeviceBufferAllocatorData
62 {
63 struct BuffersUsage {
64 jl::darray<DeviceBufferAtlas> values;
65 vkw::BufferUsageMask usage;
66 };
67 struct BuffersMemUsage {
68 void init() {
69 values.init();
70 lock.init();
71 }
72 void destroy(vkw::Device d) {
73 for (auto &v : values) {
74 for (auto &vv : v.values) {
75 vv.destroy(d);
76 }
77 v.values.destroy();
78 }
79 values.destroy();
80 lock.destroy();
81 }
82
83 jl::darray<BuffersUsage> values;
84 jth::Mutex lock;
85 };
86
87 vkw::Device device;
88 vkw::DeviceMemProps dmp;
89
90 jl::array<BuffersMemUsage, GPU_MEM_USAGE_COUNT> buffers_by_mem_usage;
91 jl::array<bool, GPU_MEM_USAGE_COUNT> mem_usage_supported;
92 };
93
94
95 #include <math/misc.h>
96
97 [[nodiscard]] vkw::Result jen::DeviceBuffer::
98 init(vkw::Device device,
99 const vkw::DeviceMemProps &dmp,
100 vkw::DeviceSize size,
101 vkw::MemPropMask mem_props,
102 vkw::BufferUsageMask buf_usage,
103 bool map)
104 {
105 vkw::Result res;
106 res = buffer.init(device, vkw::Buffer::Mask(), size, buf_usage);
107 if (res != VK_SUCCESS)
108 return res;
109
110 auto rs = buffer.memoryRequirements(device);
111
112 rs.type_mask = vkw::filter_mem_types(dmp, rs.type_mask, mem_props);
113 if (rs.type_mask == 0)
114 return vkw::ERROR_DEVICE_MEMORY_TYPE_NOT_FOUND;
115
116 jl::array<bool, 16> heap_used = {};
117 for (uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i) {
118 if (rs.type_mask & (1<<i)) {
119 auto &heap_i = dmp.memoryTypes[i].heapIndex;
120 if (heap_used[heap_i])
121 continue;
122 heap_used[heap_i] = true;
123 res = memory.allocate(device, rs.size, i);
124 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY) continue;
125 if (res != VK_SUCCESS)
126 goto DESTROY_BUFFER;
127
128 this->mem_props = dmp.memoryTypes[i].propertyFlags;
129 goto ALLOCATED;
130 }
131 }
132
133 res = vkw::ERROR_DEVICE_MEMORY_TYPE_NOT_FOUND;
134 goto DESTROY_BUFFER;
135
136 ALLOCATED:
137
138
139 res = buffer.bind_memory(device, memory, 0);
140 if (res != VK_SUCCESS)
141 goto FREE_MEMORY;
142
143 if (map and this->mem_props & vkw::MemProp::HOST_VISIBLE) {
144 res = memory.map(device, 0, rs.size, &p_mapped);
145 if (res != VK_SUCCESS)
146 goto FREE_MEMORY;
147 }
148 else p_mapped = nullptr;
149
150 return VK_SUCCESS;
151
152 FREE_MEMORY:
153 memory.deallocate(device);
154 DESTROY_BUFFER:
155 buffer.destroy(device);
156 return res;
157 }
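The selection loop in DeviceBuffer::init walks the memory-type bitmask returned by the driver, tries each allowed type once per heap, and moves on to another heap when a heap runs out of device memory. The core of that strategy can be isolated into a standalone sketch; `MemType` and `pick_memory_type` below are illustrative stand-ins, not part of JEN or the Vulkan API.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for the heapIndex field of VkMemoryType.
struct MemType { uint32_t heap_index; };

// Returns the index of the first memory type allowed by `type_mask` whose
// heap has not been tried yet, or -1 when every candidate heap is exhausted.
int pick_memory_type(uint32_t type_mask, const MemType *types, uint32_t count,
                     bool *heap_tried) {
    for (uint32_t i = 0; i < count; ++i) {
        if (!(type_mask & (1u << i)))
            continue;                  // type filtered out by requirements
        uint32_t heap = types[i].heap_index;
        if (heap_tried[heap])
            continue;                  // this heap already failed (e.g. OOM)
        heap_tried[heap] = true;
        return int(i);
    }
    return -1;
}
```

Marking a heap as tried before the allocation attempt mirrors the `heap_used` bookkeeping above: if an allocation from one type on a heap fails with out-of-device-memory, other types on the same heap are skipped rather than retried.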
158
159 void jen::DeviceBuffer::
160 destroy(vkw::Device device) {
161 if (p_mapped != nullptr)
162 memory.unmap(device);
163 memory.deallocate(device);
164 buffer.destroy(device);
165 }
166
167 [[nodiscard]] vkw::Result jen::DeviceBufferAtlas::
168 allocate(vkw::Device d, vkw::DeviceSize size, vkw::DeviceSize alignment,
169 bool map, DeviceBufferPart *p_dst)
170 {
171 auto ares = alignment == 0
172 ? atlas.add(size, &p_dst->region)
173 : atlas.add(size, alignment, &p_dst->region);
174 if (ares == atlas::Result::SUCCESS) {
175 if ((buffer.mem_props & vkw::MemProp::HOST_VISIBLE)
176 and map and buffer.p_mapped == nullptr) {
177 vkw::Result res;
178 res = buffer.memory.map(d, 0, atlas.size, &buffer.p_mapped);
179 if (res != VK_SUCCESS) {
180 atlas.remove(p_dst->region);
181 return res;
182 }
183 }
184 p_dst->p_mapped = buffer.p_mapped;
185 p_dst->buffer = buffer.buffer;
186 p_dst->memory = buffer.memory;
187 p_dst->mem_props = buffer.mem_props;
188 return VK_SUCCESS;
189 }
190 if (ares == atlas::Result::ALLOC_ERROR)
191 return VK_ERROR_OUT_OF_HOST_MEMORY;
192
193 jassert(ares == atlas::Result::NO_SIZE, "unexpected atlas result");
194 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
195 }
196
197 void jen::DeviceBufferAtlas::
198 deallocate(const DeviceBufferPart &gba) {
199 atlas.remove(gba.region);
200 }
201
202 [[nodiscard]] bool jen::DeviceBufferAllocator::
203 init(vkw::Device device, const vkw::DeviceMemProps &devmemprops) {
204 if (not jl::allocate(&p))
205 return false;
206 p->device = device;
207 p->dmp = devmemprops;
208 p->mem_usage_supported = {};
209
210 for (uint32_t muse = 0; muse < GPU_MEM_USAGE_COUNT; ++muse) {
211 for (uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i) {
212 auto &mprops = GPU_MEM_USAGE_PROPS[muse];
213 auto filtered = p->dmp.memoryTypes[i].propertyFlags & mprops;
214 if (filtered == mprops) {
215 p->mem_usage_supported[muse] = true;
216 p->buffers_by_mem_usage[muse].init();
217 break;
218 }
219 }
220 }
221 jassert(p->mem_usage_supported[STATIC]
222 and p->mem_usage_supported[STAGING_STATIC_DST],
223 "vulkan specification guarantee");
224 return true;
225 }
226 void jen::DeviceBufferAllocator::
227 destroy() {
228 for (uint32_t muse = 0; muse < GPU_MEM_USAGE_COUNT; ++muse)
229 if (p->mem_usage_supported[muse])
230 p->buffers_by_mem_usage[muse].destroy(p->device);
231 jl::deallocate(&p);
232 }
233
234
235 [[nodiscard]] vkw::Result jen::DeviceBufferAtlas::
236 init(vkw::Device device,
237 const vkw::DeviceMemProps &dmp,
238 vkw::DeviceSize size,
239 vkw::MemPropMask mem_props,
240 vkw::BufferUsageMask buf_usage,
241 bool map,
242 DeviceBufferPart *p_dst)
243 {
244 vkw::DeviceSize preferred_size = preferred_allocation_size();
245
246 vkw::DeviceSize allocation_size;
247 if (size < preferred_size)
248 allocation_size = preferred_size;
249 else
250 allocation_size = math::round_up(size, preferred_size);
251
252 vkw::Result res;
253 res = buffer.init(device, dmp, allocation_size, mem_props, buf_usage, map);
254 if (res != VK_SUCCESS)
255 return res;
256
257 if (not atlas.init(allocation_size, 8)) {
258 res = VK_ERROR_OUT_OF_HOST_MEMORY;
259 goto DESTROY_BUFFER;
260 }
261
262 atlas::Result ares;
263 ares = atlas.add(size, &p_dst->region);
264 jassert(ares == atlas::Result::SUCCESS, "atlas can't fail here");
265
266
267 p_dst->p_mapped = buffer.p_mapped;
268 p_dst->buffer = buffer.buffer;
269 p_dst->memory = buffer.memory;
270 p_dst->mem_props = buffer.mem_props;
271
272 return VK_SUCCESS;
273
274 DESTROY_BUFFER:
275 buffer.destroy(device);
276 return res;
277 }
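The sizing policy used above — never allocate less than the preferred block size, and round larger requests up to a multiple of it — is easy to check in isolation. `round_up` here is a local stand-in for `math::round_up`, written from the usual "smallest multiple not below v" definition.

```cpp
#include <cassert>
#include <cstdint>

using u64 = uint64_t;

// Local stand-in for math::round_up: smallest multiple of `m` that is >= `v`.
static u64 round_up(u64 v, u64 m) { return (v + m - 1) / m * m; }

// Mirrors the sizing policy of DeviceBufferAtlas::init: small requests are
// padded to one preferred block, large requests to a multiple of it.
static u64 allocation_size(u64 request, u64 preferred) {
    return request < preferred ? preferred : round_up(request, preferred);
}
```

Padding every allocation to a multiple of the preferred size keeps the number of driver-level allocations low, which matters because implementations cap the total count (maxMemoryAllocationCount).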
278
279 void jen::DeviceBufferAtlas::
280 destroy(vkw::Device device) {
281 jassert_soft(is_empty(), "not empty while destroying\n");
282 atlas.destroy();
283 buffer.destroy(device);
284 }
285
286 [[nodiscard]] vkw::Result jen::DeviceBufferAllocator::
287 allocate(vkw::DeviceSize size,
288 vkw::DeviceSize alignment,
289 DevMemUsage mem_usage,
290 vkw::BufferUsageMask buffer_usage_mask,
291 bool map_memory,
292 DeviceBufferPart *p_dst)
293 {
294 jassert(size > 0, "size cannot be 0");
295 vkw::Result res;
296
297 if (not p->mem_usage_supported[mem_usage])
298 mem_usage = STAGING_STATIC_DST;
299 FALLBACK:
300 auto &buffers_by_muse = p->buffers_by_mem_usage[mem_usage];
301 buffers_by_muse.lock.lock();
302 for (auto &buffers_by_use : buffers_by_muse.values) {
303 if (buffers_by_use.usage == buffer_usage_mask) {
304
305 for (auto &buffer : buffers_by_use.values) {
306 res = buffer.allocate(p->device, size, alignment, map_memory, p_dst);
307 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
308 continue;
309 p_dst->mem_use_index = mem_usage;
310 p_dst->buffer_usage = buffer_usage_mask;
311 goto RETURN;
312 }
313 }
314 }
315
316 if (not buffers_by_muse.values.insert_dummy()) {
317 res = VK_ERROR_OUT_OF_HOST_MEMORY;
318 goto RETURN;
319 }
320 {
321 auto &new_usage = buffers_by_muse.values.last();
322 new_usage.usage = buffer_usage_mask;
323 if (not new_usage.values.init(8)) {
324     buffers_by_muse.values.remove_last();
325     res = VK_ERROR_OUT_OF_HOST_MEMORY;
326     goto RETURN;
327 }
328
329 {
330 new_usage.values.insert_dummy_no_resize_check();
331 auto &new_buffer = new_usage.values.last();
332 res = new_buffer.init(p->device, p->dmp, size,
333 GPU_MEM_USAGE_PROPS[mem_usage],
334 buffer_usage_mask, map_memory, p_dst);
335 buffers_by_muse.lock.unlock();
336 if (res != VK_SUCCESS) {
337 new_usage.values.remove_last();
338 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY) {
339 if (mem_usage != STAGING_STATIC_DST) {
340 mem_usage = STAGING_STATIC_DST;
341 goto FALLBACK;
342 }
343 }
344 return res;
345 }
346 p_dst->mem_use_index = mem_usage;
347 p_dst->buffer_usage = buffer_usage_mask;
348 return res;
349 }
350 }
351 RETURN:
352 buffers_by_muse.lock.unlock();
353 return res;
354 }
355
356 void jen::DeviceBufferAllocator::
357 deallocate(const DeviceBufferPart &bp)
358 {
359 jassert(bp.mem_use_index < GPU_MEM_USAGE_COUNT,
360 "corrupted or incorrect buffer allocation");
361 auto &buffers = p->buffers_by_mem_usage[bp.mem_use_index];
362 buffers.lock.lock();
363 {
364 for (auto &bu : buffers.values) {
365 if (bu.usage == bp.buffer_usage) {
366 for (auto &b : bu.values) {
367 if (b.buffer.buffer == bp.buffer) {
368 b.deallocate(bp);
369 buffers.lock.unlock();
370 return;
371 }
372 }
373 }
374 }
375 }
376 buffers.lock.unlock();
377 jassert_soft(false, "failed to find buffer while removing\n");
378 }
File src/allocator/memory.cpp added (mode: 100644) (index 0000000..72dec0a)
1 #include <jen/allocator/memory.h>
2 #include <jlib/threads.h>
3
4 namespace jen {
5 struct DeviceMemory;
6 }
7
8 struct jen::DeviceMemory
9 {
10 constexpr static const vkw::DeviceSize MEGABYTE = 1024 * 1024;
11 constexpr static const uint32_t MAX_ALLOCATIONS_PER_TYPE = 64;
12
13 [[nodiscard]] constexpr static vkw::DeviceSize
14 preferred_allocation_size(vkw::DeviceSize heap_size) {
15 return math::round_up(heap_size / MAX_ALLOCATIONS_PER_TYPE, MEGABYTE);
16 }
17
18
19 [[nodiscard]] vkw::Result
20 init(vkw::Device, const vkw::DeviceMemProps &dmp, vkw::DeviceSize part_size,
21 vkw::MemType mem_type, bool map, DeviceMemoryPart *p_dst);
22
23 [[nodiscard]] vkw::Result map_memory(vkw::Device d) {
24 return memory.map(d, 0, atlas.size, &p_mapped);
25 }
26
27 void destroy(vkw::Device device) {
28 jassert_soft(is_empty(), "not clean while destroying\n");
29 atlas.destroy();
30 if (p_mapped != nullptr)
31 memory.unmap(device);
32 memory.deallocate(device);
33 atlas.size = 0;
34 }
35
36 [[nodiscard]] vkw::Result
37 add(vkw::Device, vkw::DeviceSize size, vkw::DeviceSize alignment, bool map,
38 DeviceMemoryPart *p_dst);
39
40 void remove(const DeviceMemoryPart& part) { atlas.remove(part.part); }
41
42 [[nodiscard]] bool is_empty() { return atlas.is_full(); }
43
44 vkw::Memory memory;
45 atlas::Atlas1D atlas;
46 uint8_t *p_mapped;
47 };
48
49 struct jen::DeviceMemoryAllocatorData
50 {
51 struct LockedMemoryArray {
52 jl::array<DeviceMemory, DeviceMemory::MAX_ALLOCATIONS_PER_TYPE> values;
53 jth::Mutex lock;
54 };
55
56 vkw::Device device;
57 vkw::DeviceMemProps dmp;
58
59 constexpr static const uint8_t MAX_MEMORY_TYPES = VK_MAX_MEMORY_TYPES;
60 jl::array<LockedMemoryArray, MAX_MEMORY_TYPES> mem_types;
61 };
62
63 #include <math/misc.h>
64
65 [[nodiscard]] vkw::Result jen::DeviceMemory::
66 init(vkw::Device dev, const vkw::DeviceMemProps &dmp, vkw::DeviceSize part_size,
67 vkw::MemType mem_type, bool map, DeviceMemoryPart *p_dst)
68 {
69 vkw::DeviceSize heapsize;
70 heapsize = dmp.memoryHeaps[dmp.memoryTypes[mem_type].heapIndex].size;
71 vkw::DeviceSize preferred_size = preferred_allocation_size(heapsize);
72
73 vkw::DeviceSize allocation_size;
74 if (part_size < preferred_size)
75 allocation_size = preferred_size;
76 else
77 allocation_size = math::round_up(part_size, preferred_size);
78
79 // TODO reduce allocation size if out of device memory
80 vkw::Result res;
81 res = memory.allocate(dev, allocation_size, mem_type);
82 if (res != VK_SUCCESS)
83 return res;
84
85 if (not atlas.init(allocation_size, 4)) {
86 res = VK_ERROR_OUT_OF_HOST_MEMORY;
87 goto CANCEL;
88 }
89
90 if (map) {
91     res = map_memory(dev);
92     if (res != VK_SUCCESS)
93         { atlas.destroy(); goto CANCEL; } // also release the atlas initialized above
94 }
95 else p_mapped = nullptr;
96
97 p_dst->memory = memory;
98 p_dst->p_mapped = p_mapped;
99
100 atlas::Result ares;
101 ares = atlas.add(part_size, &p_dst->part);
102 jassert(ares == atlas::Result::SUCCESS, "atlas can't fail here");
103
104 return VK_SUCCESS;
105
106 CANCEL:
107 memory.deallocate(dev);
108 return res;
109 }
110
111 [[nodiscard]] vkw::Result jen::DeviceMemory::
112 add(vkw::Device dev, vkw::DeviceSize size, vkw::DeviceSize alignment, bool map,
113 DeviceMemoryPart *p_dst)
114 {
115 auto ares = atlas.add(size, alignment, &p_dst->part);
116 if (ares == atlas::Result::SUCCESS) {
117 p_dst->memory = memory;
118 // p_dst->p_mapped is set below, once the mapping state is known
119
120 if (map) {
121 if (p_mapped == nullptr) {
122 vkw::Result res = map_memory(dev);
123 if (res != VK_SUCCESS) {
124 atlas.remove(p_dst->part);
125 return res;
126 }
127 }
128 p_dst->p_mapped = p_mapped + p_dst->part.offset;
129 }
130 else
131 p_dst->p_mapped = nullptr;
132 return VK_SUCCESS;
133 }
134 if (ares == atlas::Result::NO_SIZE)
135 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
136
137 jassert(ares == atlas::Result::ALLOC_ERROR, "unexpected atlas result");
138 return VK_ERROR_OUT_OF_HOST_MEMORY;
139 }
140
141 [[nodiscard]] bool jen::DeviceMemoryAllocator::
142 init(vkw::Device d, const vkw::DeviceMemProps &dmp) {
143 if (not jl::allocate(&p))
144 return false;
145 for (auto &mt : p->mem_types) {
146 for (auto &m : mt.values)
147 m.atlas.size = 0;
148 mt.lock.init();
149 }
150 p->device = d;
151 p->dmp = dmp;
152 return true;
153 }
154 void jen::DeviceMemoryAllocator::
155 destroy() {
156 for (auto &mt : p->mem_types) {
157 mt.lock.destroy();
158 for (auto &m : mt.values)
159 if (m.atlas.size != 0)
160 m.destroy(p->device);
161 }
162 jl::deallocate(&p);
163 }
164
165 [[nodiscard]] vkw::Result jen::DeviceMemoryAllocator::
166 map_memory(DeviceMemoryPart *p_part) {
167 auto &type = p->mem_types[p_part->type];
168 type.lock.lock();
169 auto &m = type.values[p_part->allocator_index];
170 vkw::Result res = VK_SUCCESS;
171 if (m.p_mapped == nullptr)
172 res = m.map_memory(p->device);
173 p_part->p_mapped = m.p_mapped;
174 type.lock.unlock();
175 return res;
176 }
177
178 void jen::DeviceMemoryAllocator::
179 deallocate(const DeviceMemoryPart &part) {
180 auto &type = p->mem_types[part.type];
181 type.lock.lock();
182 auto &m = type.values[part.allocator_index];
183 m.remove(part);
184 if (m.is_empty())
185 m.destroy(p->device);
186 type.lock.unlock();
187 }
188
189 [[nodiscard]] vkw::Result jen::DeviceMemoryAllocator::
190 allocate(const vkw::MemReqs &mrs, bool map, DeviceMemoryPart *p_dst)
191 {
192 for (uint32_t mtype = 0; mtype < p->MAX_MEMORY_TYPES; ++mtype) {
193 uint32_t type_bit = 1 << mtype;
194 if (not (mrs.type_mask & type_bit))
195 continue;
196
197 vkw::Result res;
198
199 auto &type = p->mem_types[mtype];
200 type.lock.lock();
201
202 uint32_t first_nonallocated = uint32_t(-1);
203 for (uint32_t i = 0; i < type.values.count(); ++i) {
204 auto &m = type.values[i];
205 if (m.atlas.size == 0) {
206 if (first_nonallocated == uint32_t(-1))
207 first_nonallocated = i;
208 }
209 else if (not m.is_empty()) {
210 res = m.add(p->device, mrs.size, mrs.alignment, map, p_dst);
211 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
212 continue;
213 p_dst->allocator_index = i;
214 goto RETURN;
215 }
216 }
217
218 if (first_nonallocated == uint32_t(-1))
219 goto CONTINUE;
220
221 res = type.values[first_nonallocated]
222 .init(p->device, p->dmp, mrs.size, mtype, map, p_dst);
223 p_dst->allocator_index = first_nonallocated;
224 if (res != VK_ERROR_OUT_OF_DEVICE_MEMORY)
225 goto RETURN;
226
227 CONTINUE:
228 type.lock.unlock();
229 continue;
230 RETURN:
231 if (res == VK_SUCCESS) {
232 p_dst->type = mtype;
233 }
234 type.lock.unlock();
235 return res;
236 }
237 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
238 }
File src/compute/binding_set.h deleted (index 54a5ebf..0000000)
1 #pragma once
2
3 #include "../graphics/draw_stages/descriptors.h"
4 #include "bindings.h"
5
6 namespace jen::compute
7 {
8 struct Bindings {
9 jl::rarray<const BindingBufferView> uniform_texel_buffer;
10 jl::rarray<const BindingBufferView> storage_texel_buffer;
11 jl::rarray<const BindingBuffer> uniform_buffer;
12 jl::rarray<const BindingBuffer> storage_buffer;
13 jl::rarray<const BindingImage> storage_image;
14 };
15
16 struct BindingsSet
17 {
18 [[nodiscard]] Result
19 init(vk::Device *p_dev, vkw::DescrLayout setLayout, const Bindings &bi)
20 {
21 uint32_t numPoolPart = 0;
22 uint32_t numSets = 0;
23 jl::array<vkw::DescrPoolPart,4> pool_parts;
24 auto put_part = [&numSets, &numPoolPart, &pool_parts]
25 (vkw::DescrType dt, auto part) {
26 if (part.count32() > 0) {
27 pool_parts[numPoolPart].type = dt;
28 numSets += pool_parts[numPoolPart].count = part.count32();
29 ++numPoolPart;
30 }
31 };
32 put_part(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
33 put_part(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
34 put_part(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
35 put_part(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
36 put_part(vkw::DescrType::STORAGE_IMAGE, bi.storage_image);
37
38 Result res;
39 res = pool.init(*p_dev, {}, {pool_parts.begin(), numPoolPart}, numSets);
40 if (res != VK_SUCCESS)
41 return res;
42
43 res = pool.allocate_set(p_dev->device, setLayout, &set);
44 if (res != VK_SUCCESS) {
45 pool.destroy(p_dev->device);
46 return res;
47 }
48
49 auto &set_ = set;
50 auto set_views = [&set_, p_dev] (vkw::DescrType dt, auto sets) {
51 for (auto &b : sets)
52 set_.set(*p_dev, b.binding, dt, b.view);
53 };
54 set_views(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
55 set_views(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
56
57 auto set_buffers = [&set_, p_dev] (vkw::DescrType dt, auto sets) {
58 for (auto &b : sets)
59 set_.set(*p_dev, b.binding, dt, b.part.range());
60 };
61 set_buffers(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
62 set_buffers(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
63
64 for (auto &b : bi.storage_image) {
65 vkw::DescrImage des;
66 des.sampler = {};
67 des.imageView = b.p_image->image.view;
68 des.imageLayout = vkw::ImLayout::GENERAL;
69 set_.set(*p_dev, b.binding, vkw::DescrType::STORAGE_IMAGE, des);
70 }
71
72 return res;
73 }
74 void destroy(vk::Device *p_dev) {
75 pool.destroy(*p_dev);
76 }
77
78 vkw::DescrSet set;
79 vkw::DescrPool pool;
80 };
81 }
File src/compute/bindings.h deleted (index d6f02fc..0000000)
1 #pragma once
2
3 #include "../device/device.h"
4 #include "../graphics/draw_stages/gpu_image.h"
5
6 namespace jen::compute {
7 namespace BindingUseFlag { enum {
8 TRANSFER_SRC = vkw::BufferUsage::TRANSFER_SRC,
9 TRANSFER_DST = vkw::BufferUsage::TRANSFER_DST,
10 UNIFORM_TEXEL = vkw::BufferUsage::UNIFORM_TEXEL,
11 STORAGE_TEXEL = vkw::BufferUsage::STORAGE_TEXEL,
12 UNIFORM = vkw::BufferUsage::UNIFORM,
13 STORAGE = vkw::BufferUsage::STORAGE,
14 }; }
15 using BindingUseMask = uint32_t;
16
17 struct BindingCreateInfo {
18 vkw::DeviceSize size;
19 BindingUseMask use;
20 vkw::BindNo bindingNo;
21 };
22 using BindingCreateInfos = jl::rarray<const BindingCreateInfo>;
23
24 struct BindingBuffer {
25 [[nodiscard]] Result
26 init(vk::Device *p_d, const BindingCreateInfo &info) {
27 vk::DevMemUsage mem_use;
28 bool map = info.use & BindingUseFlag::TRANSFER_DST
29 or
30 info.use & BindingUseFlag::TRANSFER_SRC;
31
32 if (info.use & BindingUseFlag::UNIFORM_TEXEL
33 or
34 info.use & BindingUseFlag::UNIFORM)
35 mem_use = vk::DevMemUsage::DYNAMIC_DST;
36 else
37 mem_use = vk::DevMemUsage::STATIC;
38
39 binding = info.bindingNo;
40
41 Result res;
42 res = p_d->buffer_allocator
43 .allocate(info.size, 0, mem_use, info.use, map, &part);
44 if (res != VK_SUCCESS)
45 return res;
46 use_staging = map and not part.is_mapped();
47 if (use_staging) {
48 if (info.use & BindingUseFlag::TRANSFER_DST)
49 mem_use = vk::DevMemUsage::STAGING_STATIC_DST;
50 else
51 mem_use = vk::DevMemUsage::STAGING_SRC;
52
53 vkw::BufferUsageMask usage = info.use
54 | BindingUseFlag::TRANSFER_DST
55 | BindingUseFlag::TRANSFER_SRC;
56 res = p_d->buffer_allocator
57 .allocate(info.size, 0, mem_use, usage, true, &staging);
58 if (res != VK_SUCCESS)
59 p_d->buffer_allocator.deallocate(part);
60 }
61 return res;
62 }
63 void destroy(vk::Device *p_d) {
64 if (use_staging)
65 p_d->buffer_allocator.deallocate(staging);
66 p_d->buffer_allocator.deallocate(part);
67 }
68
69 vk::DeviceBufferPart part;
70 vkw::BindNo binding;
71 bool use_staging;
72 vk::DeviceBufferPart staging;
73 };
74
75 struct BindingBufferView : BindingBuffer {
76 [[nodiscard]] Result
77 init(vk::Device *p_d, const BindingCreateInfo &info, VkFormat format) {
78 Result res;
79 res = BindingBuffer::init(p_d, info);
80 if (res != VK_SUCCESS)
81 return res;
82 res = view.init(*p_d, part.buffer, format, part.offset(), part.size());
83 if (res != VK_SUCCESS)
84 p_d->buffer_allocator.deallocate(part);
85 return res;
86 }
87 void destroy(vk::Device *p_d) {
88 view.destroy(*p_d);
89 p_d->buffer_allocator.deallocate(part);
90 }
91 vkw::BufferView view;
92 };
93
94
95 namespace ImageUseFlag { enum {
96 TRANSFER_SRC = vkw::ImUsage::TRANSFER_SRC,
97 TRANSFER_DST = vkw::ImUsage::TRANSFER_DST,
98 STORAGE = vkw::ImUsage::STORAGE,
99 }; }
100 using ImageUseMask = uint32_t;
101
102 struct ImageCreateInfo {
103 math::v3u32 extent;
104 uint32_t layer_count;
105 uint32_t mip_level_count;
106 VkFormat format;
107 vkw::ImType type;
108 vkw::Samples samples;
109 ImageUseMask usage;
110 };
111 using ImageCreateInfos = jl::rarray<const ImageCreateInfo>;
112
113 struct Image {
114 [[nodiscard]] Result
115 init(vk::Device *p_d, const ImageCreateInfo &info) {
116 vk::ImageInfo ii; {
117 ii.extent = {info.extent.x, info.extent.y, info.extent.z};
118 ii.layer_count = info.layer_count;
119 ii.mip_level_count = info.mip_level_count;
120 ii.format = info.format;
121 ii.type = info.type;
122 ii.samples = info.samples;
123 ii.usage = info.usage;
124 ii.flags = {};
125 ii.tiling = vkw::Tiling::OPTIMAL;
126 }
127 vk::ViewInfo vi; {
128 vi.type = vkw::ImViewType(info.type);
129 vi.aspect = vkw::ImAspect::COLOR;
130 }
131 Result res = image.init(p_d, &ii, &vi);
132 if (res != VK_SUCCESS)
133 return res;
134
135 VkDeviceSize size = vkw::format_size(ii.format) * ii.extent.volume();
136 size *= ii.layer_count;
137 vk::DevMemUsage mem_use = vk::DevMemUsage::STAGING_STATIC_DST;
138
139 res = p_d->buffer_allocator
140 .allocate(size, 0, mem_use, vkw::BufferUsage::TRANSFER_SRC
141 | vkw::BufferUsage::TRANSFER_DST, true, &staging);
142 if (res != VK_SUCCESS)
143 image.destroy(p_d);
144
145 format = info.format;
146 layout = vkw::ImLayout::UNDEFINED;
147 mip_level_count = info.mip_level_count;
148 layer_count = info.layer_count;
149 return res;
150 }
151 void destroy(vk::Device *p_d) {
152 p_d->buffer_allocator.deallocate(staging);
153 image.destroy(p_d);
154 }
155
156
157 void
158 transitionLayout(vkw::CmdBuffer *p_cmd,
159 vkw::ImLayout layout, vkw::StageMaskChange stages) {
160 vkw::BarrierImMem barrier; {
161 barrier.access_change.src = vkw::AccessMask();
162 barrier.access_change.dst = vkw::AccessMask();
163 barrier.layout_change.src = this->layout;
164 barrier.layout_change.dst = layout;
165 barrier.queueFamily_change.set_both(VK_QUEUE_FAMILY_IGNORED);
166 barrier.image = image.image;
167 barrier.range.mip_levels_offset = 0;
168 barrier.range.mip_levels_count = mip_level_count;
169 barrier.range.layers_offset = 0;
170 barrier.range.layers_count = layer_count;
171 barrier.range.aspect = vkw::ImAspect::COLOR;
172 }
173 p_cmd->cmd_barriers(stages, {}, {}, barrier);
174 }
175
176 using GpuImage = vk::GpuImage<vk::GpuImageMode::VIEW>;
177 GpuImage image;
178 VkFormat format;
179 vkw::ImLayout layout;
180 vk::DeviceBufferPart staging;
181 uint32_t mip_level_count;
182 uint32_t layer_count;
183 };
184
185 struct BindingImage {
186 void init(Image *p_image, vkw::BindNo bindingNo) {
187 this->binding = bindingNo;
188 this->p_image = p_image;
189 }
190
191 Image *p_image;
192 vkw::BindNo binding;
193 };
194 }
File src/compute/cmd_unit.cpp added (mode: 100644) (index 0000000..8bb44ee)
1 #include <jen/compute.h>
2 #include "../device/device.h"
3
4 using namespace jen;
5 using namespace jen::compute;
6
7 void
8 transitionLayout(Image *p, vkw::CmdBuffer *p_cmd,
9 vkw::ImLayout layout, vkw::StageMaskChange stages) {
10 vkw::BarrierImMem barrier; {
11 barrier.access_change.src = vkw::AccessMask();
12 barrier.access_change.dst = vkw::AccessMask();
13 barrier.layout_change.src = p->layout;
14 barrier.layout_change.dst = layout;
15 barrier.queueFamily_change.set_both(VK_QUEUE_FAMILY_IGNORED);
16 barrier.image = p->image.image;
17 barrier.range.mip_levels_offset = 0;
18 barrier.range.mip_levels_count = p->mip_level_count;
19 barrier.range.layers_offset = 0;
20 barrier.range.layers_count = p->layer_count;
21 barrier.range.aspect = vkw::ImAspect::COLOR;
22 }
23 p_cmd->cmd_barriers(stages, {}, {}, barrier);
24 }
25
26 void check_transfer(const jen::DeviceBufferPart &part,
27 vkw::DeviceSize offset, vkw::DeviceSize size) {
28 jassert(offset + size <= part.size(), "region exceeds buffer");
29 jassert(part.is_mapped(), "cannot access memory");
30 jassert(not part.is_flush_needed(), "flush not supported");
31 }
32 void
33 write_to_allocation(void *p_src, jen::DeviceBufferPart *p_dst,
34 vkw::DeviceSize dst_offset, vkw::DeviceSize size) {
35 check_transfer(*p_dst, dst_offset, size);
36 memcpy(p_dst->p_data() + dst_offset, p_src, size);
37 }
38
39 void
40 read_from_allocation(jen::DeviceBufferPart *p_src, void *p_dst,
41 vkw::DeviceSize src_offset, vkw::DeviceSize size) {
42 check_transfer(*p_src, src_offset, size);
43 memcpy(p_dst, p_src->p_data() + src_offset, size);
44 }
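The helpers above guard every staging copy with the same checks before touching mapped memory. A self-contained version of the write path, with a plain struct standing in for the mapped view of jen::DeviceBufferPart (the names are illustrative), looks like this:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Illustrative stand-in for a host-visible sub-allocation.
struct MappedPart {
    uint8_t *p_data;   // pointer into the mapped range, or nullptr
    uint64_t size;     // size of the sub-allocation in bytes
};

// Mirrors write_to_allocation: validate the region, then memcpy into the
// mapped sub-allocation at the given offset.
static bool write_part(const void *p_src, MappedPart *p_dst,
                       uint64_t dst_offset, uint64_t size) {
    if (p_dst->p_data == nullptr)            // not host-visible / not mapped
        return false;
    if (dst_offset + size > p_dst->size)     // region exceeds the buffer
        return false;
    memcpy(p_dst->p_data + dst_offset, p_src, size);
    return true;
}
```

The original asserts instead of returning false, since callers are expected to have sized the staging buffer correctly; the sketch returns a status so the checks are observable.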
45
46
47 struct jen::ComputeCmdUnitData {
48 [[nodiscard]] Result init(Device *p_dev) {
49 this->p_dev = p_dev;
50 Result res;
51 res = compute_cmds
52 .init(*p_dev, p_dev->queue_indices.compute.family,
53 vkw::CmdPoolFlag::MANUAL_CMD_RESET);
54 if (res != VK_SUCCESS)
55 return res;
56
57 res = transfer_cmds
58 .init(*p_dev, p_dev->queue_indices.transfer.family,
59 vkw::CmdPoolFlag::MANUAL_CMD_RESET);
60 if (res != VK_SUCCESS)
61 goto CCC;
62
63 res = syncs.init(*p_dev);
64 if (res != VK_SUCCESS)
65 goto CTC;
66
67 wait_transfer_write = wait_transfer_read = wait_compute = false;
68 return VK_SUCCESS;
69
70 CTC:
71 transfer_cmds.destroy(*p_dev);
72 CCC:
73 compute_cmds.destroy(*p_dev);
74 return res;
75 }
76 void destroy() {
77 transfer_cmds.destroy(*p_dev);
78 compute_cmds.destroy(*p_dev);
79 syncs.destroy(*p_dev);
80 }
81
82 [[nodiscard]] jen::Result
83 wait() {
84 jen::Result res;
85 if (wait_compute) {
86 res = syncs.fences[0].wait_and_reset(*p_dev, vkw::TIMEOUT_INFINITE);
87 if (res != VK_SUCCESS)
88 return res;
89 wait_compute = false;
90 }
91 if (wait_transfer_read) {
92 res = syncs.fences[1].wait_and_reset(*p_dev, vkw::TIMEOUT_INFINITE);
93 if (res != VK_SUCCESS)
94 return res;
95 wait_transfer_read = false;
96 }
97 return VK_SUCCESS;
98 }
99
100 [[nodiscard]] jen::Result
101 proceed_writes(BufferTransfers buffer_writes,
102 ImagesTransfers images_writes)
103 {
104 auto &cmd = transfer_cmds.primary[0];
105
106 auto begin = [&cmd, this]() -> jen::Result {
107 if (not wait_transfer_write) {
108 jen::Result res;
109 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
110 if (res != VK_SUCCESS)
111 return res;
112 wait_transfer_write = true;
113 }
114 return VK_SUCCESS;
115 };
116
117 for (uint32_t i = 0; i < buffer_writes.count(); ++i) {
118 auto &write = buffer_writes[i];
119 auto &buffer = *write.p_buffer;
120
121 jen::DeviceBufferPart *p_part;
122 if (buffer.use_staging)
123 p_part = &buffer.staging;
124 else
125 p_part = &buffer.part;
126
127 write_to_allocation(write.p_data, p_part, write.offset, write.size);
128
129 if (buffer.use_staging) {
130 vkw::BufferChange bs;
131 bs.src = buffer.staging.buffer;
132 bs.dst = buffer.part.buffer;
133 vkw::BufferRegion region;
134 region.offsets.src = buffer.staging.offset();
135 region.offsets.dst = buffer.part.offset();
136 region.size = write.size;
137 auto res = begin();
138 if (res != VK_SUCCESS)
139 return res;
140 cmd.cmd_cp_buffer(bs, region);
141 }
142 }
143
144 for (uint32_t i = 0; i < images_writes.count(); ++i) {
145 auto res = begin();
146 if (res != VK_SUCCESS)
147 return res;
148
149 auto &w = images_writes[i];
150 auto &im = *w.p_image;
151
152 if (im.layout != vkw::ImLayout::TRANSFER_DST) {
153 vkw::StageMaskChange stages;
154 stages.src = vkw::StageFlag::TOP_OF_PIPE;
155 stages.dst = vkw::StageFlag::TRANSFER;
156 transitionLayout(&im, &cmd, vkw::ImLayout::TRANSFER_DST, stages);
157 }
158
159 vkw::DeviceSize offset = 0;
160 for (auto &r : w.transfers) {
161 auto size = r.extent.volume() * vkw::format_size(im.format)
162 * r.layer_count;
163 write_to_allocation(r.p_data, &im.staging, offset, size);
164
165 vkw::BufferAndImageRegion region; {
166 region.bufferOffset = im.staging.offset() + offset;
167 region.bufferRowLength = region.bufferImageHeight = 0;
168 region.imageSubresource = {
169 vkw::ImAspect::COLOR,
170 r.mip_level,
171 r.layer_offset,
172 r.layer_count
173 };
174 region.imageOffset.x = int32_t(r.offset.x);
175 region.imageOffset.y = int32_t(r.offset.y);
176 region.imageOffset.z = int32_t(r.offset.z);
177 region.imageExtent.width = r.extent.x;
178 region.imageExtent.height = r.extent.y;
179 region.imageExtent.depth = r.extent.z;
180 }
181 cmd.cmd_cp_buffer_to_image({im.staging.buffer, im.image.image},
182 region, vkw::ImLayout::TRANSFER_DST);
183
184 offset += size;
185 }
186 }
187
188 if (wait_transfer_write) {
189 jen::Result res;
190 res = cmd.end();
191 if (res != VK_SUCCESS)
192 return res;
193 vkw::QueueSignal signal(syncs.semaphores[0].p_vk);
194 vkw::QueueSubmit submit(cmd, {}, signal);
195 res = p_dev->queues.transfer.submit_locked(submit);
196 if (res != VK_SUCCESS)
197 return res;
198
199 for (uint32_t i = 0; i < images_writes.count(); ++i)
200 images_writes[i].p_image->layout = vkw::ImLayout::TRANSFER_DST;
201 }
202
203 return VK_SUCCESS;
204 }
205
206 [[nodiscard]] Result
207 proceed_staging_reads(BufferTransfers buffer_reads,
208 ImagesTransfers images_reads)
209 {
210 auto &cmd = transfer_cmds.primary[1];
211 auto begin = [&cmd, this]() -> jen::Result {
212 if (not wait_transfer_read) {
213 jen::Result res;
214 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
215 if (res != VK_SUCCESS)
216 return res;
217 wait_transfer_read = true;
218 }
219 return VK_SUCCESS;
220 };
221
222 for (uint32_t i = 0; i < buffer_reads.count(); ++i) {
223 auto &read = buffer_reads[i];
224 auto &buffer = *read.p_buffer;
225
226 if (buffer.use_staging) {
227 vkw::BufferChange bs;
228 bs.src = buffer.part.buffer;
229 bs.dst = buffer.staging.buffer;
230 vkw::BufferRegion region;
231 region.offsets.src = buffer.part.offset();
232 region.offsets.dst = buffer.staging.offset();
233 region.size = read.size;
234 auto res = begin();
235 if (res != VK_SUCCESS)
236 return res;
237 cmd.cmd_cp_buffer(bs, region);
238 }
239 }
240
241 for (uint32_t i = 0; i < images_reads.count(); ++i) {
242 auto res = begin();
243 if (res != VK_SUCCESS)
244 return res;
245
246 auto &w = images_reads[i];
247 auto &im = *w.p_image;
248
249 if (im.layout != vkw::ImLayout::TRANSFER_SRC) {
250 vkw::StageMaskChange stages;
251 stages.src = vkw::StageFlag::TOP_OF_PIPE;
252 stages.dst = vkw::StageFlag::TRANSFER;
253 transitionLayout(&im, &cmd, vkw::ImLayout::TRANSFER_SRC, stages);
254 }
255
256 vkw::DeviceSize offset = 0;
257 for (auto &r : w.transfers) {
258 vkw::BufferAndImageRegion region; {
259 region.bufferOffset = im.staging.offset() + offset;
260 region.bufferRowLength = region.bufferImageHeight = 0;
261 region.imageSubresource = {
262 vkw::ImAspect::COLOR,
263 r.mip_level,
264 r.layer_offset,
265 r.layer_count
266 };
267 region.imageOffset.x = int32_t(r.offset.x);
268 region.imageOffset.y = int32_t(r.offset.y);
269 region.imageOffset.z = int32_t(r.offset.z);
270 region.imageExtent.width = r.extent.x;
271 region.imageExtent.height = r.extent.y;
272 region.imageExtent.depth = r.extent.z;
273 }
274 cmd.cmd_cp_image_to_buffer({im.image.image, im.staging.buffer},
275 region, vkw::ImLayout::TRANSFER_SRC);
276
277 offset += r.extent.volume() * vkw::format_size(im.format) * r.layer_count;
278 }
279 }
280
281 if (wait_transfer_read) {
282 wait_compute = false;
283 jen::Result res;
284 res = cmd.end();
285 if (res != VK_SUCCESS)
286 return res;
287 vkw::QueueWait wait;
288 wait.semaphores = syncs.semaphores[1].p_vk;
289 wait.stage_masks = vkw::StageFlag::COMPUTE_SHADER;
290 vkw::QueueSubmit submit(cmd, wait);
291 res = p_dev->queues.transfer.submit_locked(submit, syncs.fences[1]);
292 if (res != VK_SUCCESS)
293 return res;
294
295 for (uint32_t i = 0; i < images_reads.count(); ++i)
296 images_reads[i].p_image->layout = vkw::ImLayout::TRANSFER_SRC;
297 }
298
299 return VK_SUCCESS;
300 }
301
302
303 struct SyncCounts : vk::SyncContainerCounts {
304 constexpr static const uint32_t FENCES = 2;
305 constexpr static const uint32_t SEMAPHORES = 2;
306 };
307
308 Device *p_dev;
309 vk::CmdPoolContainer<1, 0> compute_cmds;
310 vk::CmdPoolContainer<2, 0> transfer_cmds;
311 vk::SyncContainer<SyncCounts> syncs;
312 bool wait_transfer_write;
313 bool wait_compute;
314 bool wait_transfer_read;
315 };
316
317 [[nodiscard]] Result ComputeCmdUnit::
318 init(ModuleCompute mc) {
319 if (not jl::allocate(&p))
320 return VK_ERROR_OUT_OF_HOST_MEMORY;
321 Result res = p->init(mc.p_device);
322 if (res != VK_SUCCESS)
323 jl::deallocate(&p);
324 return res;
325 }
326 void ComputeCmdUnit::destroy() {
327 p->destroy();
328 jl::deallocate(&p);
329 }
330
331
332 [[nodiscard]] Result
333 check_computeInfo(const Device &device,
334 const ComputeInfo &info) {
335 for (int i = 0; i < 3; ++i)
336 if (info.group_count[i] >
337 device.properties.limits.maxComputeWorkGroupCount[i]) {
338 fprintf(stderr, "ComputeInfo.group_count[%i] exceeds the device "
339     "limit maxComputeWorkGroupCount[%i]=%u. "
340     "Values up to 65535 are always safe, because that is "
341     "the minimum limit the Vulkan specification "
342     "guarantees.\n", i, i,
343     device.properties.limits.maxComputeWorkGroupCount[i]);
344 return vkw::ERROR_INVALID_USAGE;
345 }
346 return VK_SUCCESS;
347 }
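check_computeInfo rejects dispatches whose group count exceeds the device limit on any axis; the Vulkan specification only guarantees that maxComputeWorkGroupCount is at least 65535 per dimension, so staying at or below that value is portable. A minimal standalone version of the same check (names illustrative):

```cpp
#include <cassert>
#include <cstdint>

// The Vulkan spec guarantees maxComputeWorkGroupCount[i] >= 65535.
constexpr uint32_t GUARANTEED_MAX_GROUP_COUNT = 65535;

// Mirrors check_computeInfo: every axis of the dispatch must fit the limit.
static bool group_count_ok(const uint32_t group_count[3],
                           const uint32_t device_limit[3]) {
    for (int i = 0; i < 3; ++i)
        if (group_count[i] > device_limit[i])
            return false;
    return true;
}
```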
348
349 [[nodiscard]] Result ComputeCmdUnit::
350 compute_status() {
351 jen::Result res;
352 if (p->wait_compute) {
353 res = p->syncs.fences[0].status(*p->p_dev);
354 if (res != VK_SUCCESS)
355 return res;
356 }
357 if (p->wait_transfer_read) {
358 res = p->syncs.fences[1].status(*p->p_dev);
359 if (res != VK_SUCCESS)
360 return res;
361 }
362 return VK_SUCCESS;
363 }
364
365 [[nodiscard]] Result ComputeCmdUnit::
366 compute(const ComputeInfo &info)
367 {
368 Result res;
369 res = check_computeInfo(*p->p_dev, info);
370 if (res != VK_SUCCESS)
371 return res;
372
373 res = p->wait();
374 if (res != VK_SUCCESS)
375 return res;
376
377 res = p->proceed_writes(info.buffer_writes, info.images_writes);
378 if (res != VK_SUCCESS)
379 return res;
380
381 auto &syncs = p->syncs;
382 auto &cmds = p->compute_cmds;
383 auto &pipeline = info.p_pipeline->pipeline;
384 auto &pipelineLayout = info.p_pipeline->layout;
385 auto &set = info.p_bindingsSet->set;
386
387 auto &cmd = cmds.primary[0];
388 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
389 if (res != VK_SUCCESS)
390 return res;
391
392 for (auto &im : info.p_bindings->storage_image) {
393 auto l = vkw::ImLayout::GENERAL;
394 if (im.p_image->layout == l)
395 continue;
396 vkw::StageMaskChange stages;
397 stages.src = vkw::StageFlag::TOP_OF_PIPE;
398 stages.dst = vkw::StageFlag::COMPUTE_SHADER;
399 transitionLayout(im.p_image, &cmd, l, stages);
400 }
401
402 cmd.cmd_set_pipeline(pipeline, vkw::BindPoint::COMPUTE);
403
404 cmd.cmd_set_descr_sets(vkw::BindPoint::COMPUTE, pipelineLayout, set, 0);
405 cmd.cmd_dispatch(*reinterpret_cast<const vkw::Vector3D*>(&info.group_count));
406
407 res = cmd.end();
408 if (res != VK_SUCCESS)
409 return res;
410
411 bool use_read_semaphore = false;
412 if (info.images_reads.count() > 0)
413 use_read_semaphore = true;
414 else for (uint32_t i = 0; i < info.buffer_reads.count(); ++i) {
415 if (info.buffer_reads[i].p_buffer->use_staging) {
416 use_read_semaphore = true;
417 break;
418 }
419 }
420
421 vkw::QueueWait wait;
422 if (p->wait_transfer_write) {
423 wait.semaphores = syncs.semaphores[0].p_vk;
424 wait.stage_masks = vkw::StageFlag::TRANSFER;
425 }
426 else
427 wait = {};
428 vkw::QueueSignal signal;
429 if (use_read_semaphore)
430 signal = syncs.semaphores[1].p_vk;
431 else
432 signal = {};
433 vkw::QueueSubmit submit(cmd, wait, signal);
434
435 res = p->p_dev->queues.compute.submit_locked(submit, syncs.fences[0]);
436 if (res != VK_SUCCESS)
437 return res;
438
439 for (auto &im : info.p_bindings->storage_image) {
440 auto l = vkw::ImLayout::GENERAL;
441 im.p_image->layout = l;
442 }
443
444 p->wait_compute = true;
445
446 return p->proceed_staging_reads(info.buffer_reads, info.images_reads);
447 }
448
449
450 [[nodiscard]] Result ComputeCmdUnit::
451 read_result(BufferTransfers buffer_reads, ImagesTransfers images_reads) {
452 Result res;
453 res = p->wait();
454 if (res != VK_SUCCESS)
455 return res;
456
457 for (uint32_t i = 0; i < buffer_reads.count(); ++i) {
458 auto &read = buffer_reads[i];
459 auto &buffer = *read.p_buffer;
460
461 jen::DeviceBufferPart *p_part;
462 if (buffer.use_staging)
463 p_part = &buffer.staging;
464 else
465 p_part = &buffer.part;
466
467 read_from_allocation(p_part, read.p_data, read.offset, read.size);
468 }
469
470 for (uint32_t i = 0; i < images_reads.count(); ++i) {
471 auto &read = images_reads[i];
472 auto &im = *read.p_image;
473 auto p_part = &im.staging;
474
475 vkw::DeviceSize offset = 0;
476 for (auto &r : read.transfers) {
477 auto size = r.extent.volume() * vkw::format_size(im.format)
478 * r.layer_count;
479 read_from_allocation(p_part, r.p_data, offset, size);
480 offset += size;
481 }
482 }
483 return VK_SUCCESS;
484 }
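read_result above derives each image region's byte size as extent.volume() * format_size * layer_count and packs the regions back to back in the staging buffer, accumulating the offset as it goes. The same offset arithmetic can be sketched in isolation (Extent3, Region, and staging_offsets are illustration-only names, assuming tightly packed texels):

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-ins for the image-transfer bookkeeping.
struct Extent3 {
    uint32_t x, y, z;
    constexpr uint64_t volume() const { return uint64_t(x) * y * z; }
};
struct Region { Extent3 extent; uint32_t layer_count; };

// Returns the starting staging-buffer offset of each region, plus the
// total size as the final element (regions are packed back to back).
inline std::vector<uint64_t>
staging_offsets(const std::vector<Region> &regions, uint32_t texel_size) {
    std::vector<uint64_t> offsets;
    uint64_t offset = 0;
    for (const Region &r : regions) {
        offsets.push_back(offset);
        offset += r.extent.volume() * texel_size * r.layer_count;
    }
    offsets.push_back(offset); // total staging size needed
    return offsets;
}
```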
File src/compute/cmd_unit.h deleted (index de79201..0000000)
1 #pragma once
2
3 #include "../device/cmd_container.h"
4
5 namespace jen::compute
6 {
7 struct CmdUnit {
8 [[nodiscard]] Result init(vk::Device *p_dev) {
9 Result res;
10 res = compute_cmds.init(*p_dev, p_dev->queue_indices.compute.family,
11 vkw::CmdPoolFlag::MANUAL_CMD_RESET);
12 if (res != VK_SUCCESS)
13 return res;
14
15 res = transfer_cmds.init(*p_dev, p_dev->queue_indices.transfer.family,
16 vkw::CmdPoolFlag::MANUAL_CMD_RESET);
17 if (res != VK_SUCCESS)
18 goto CCC;
19
20 res = syncs.init(*p_dev);
21 if (res != VK_SUCCESS)
22 goto CTC;
23
24 wait_transfer_write = wait_transfer_read = wait_compute = false;
25 return VK_SUCCESS;
26
27 CTC:
28 transfer_cmds.destroy(*p_dev);
29 CCC:
30 compute_cmds.destroy(*p_dev);
31 return res;
32 }
33 void destroy(vk::Device *p_dev) {
34 transfer_cmds.destroy(*p_dev);
35 compute_cmds.destroy(*p_dev);
36 syncs.destroy(*p_dev);
37 }
38
39 struct SyncCounts : vk::SyncContainerCounts {
40 constexpr static const uint32_t FENCES = 2;
41 constexpr static const uint32_t SEMAPHORES = 2;
42 };
43
44 vk::CmdPoolContainer<1, 0> compute_cmds;
45 vk::CmdPoolContainer<2, 0> transfer_cmds;
46 vk::SyncContainer<SyncCounts> syncs;
47 bool wait_transfer_write;
48 bool wait_compute;
49 bool wait_transfer_read;
50 };
51 }
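The init() above uses goto labels (CTC, CCC) so that a failure at any step destroys only the resources already created, in reverse order, before returning the error. A self-contained sketch of the idiom with counters standing in for the command pools and sync objects (FakeUnit and fail_step are illustrative only):

```cpp
// Sketch of the goto-unwind idiom from CmdUnit::init(): each successful
// step increments `alive`; a later failure jumps to a label that tears
// down only the already-initialized steps, in reverse order.
struct FakeUnit {
    int alive = 0; // resources currently initialized

    bool init(int fail_step) {       // fail_step == 0 means full success
        if (fail_step == 1) return false; // step 1 failed: nothing to undo
        ++alive;                          // step 1 ok (compute_cmds)
        if (fail_step == 2) goto UNDO_1;  // step 2 failed
        ++alive;                          // step 2 ok (transfer_cmds)
        if (fail_step == 3) goto UNDO_2;  // step 3 failed
        ++alive;                          // step 3 ok (syncs)
        return true;
    UNDO_2:
        --alive;                          // destroy transfer_cmds
    UNDO_1:
        --alive;                          // destroy compute_cmds
        return false;
    }
};
```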
File src/compute/compute.cpp changed (mode: 100644) (index 5f141ac..b1cb5c2)
1 #include "compute.h"
2
3 #include "binding_set.h"
4 #include "pipeline.h"
5 #include "cmd_unit.h"
1 #include <jen/compute.h>
2 #include "../device/device.h"
6 3
7 4 using namespace jen;
8 using namespace jen::vk;
9 5 using namespace jen::compute;
10 6
11 [[nodiscard]] Result ModuleCompute::init(Device *p_dev) {
12 p_device = p_dev;
13 return VK_SUCCESS;
7 [[nodiscard]] Result ModuleCompute::
8 create_pipeline(const Bindings &bi, const char *p_shader_file_path,
9 vkw::ShaderSpecialization *p_spec, Pipeline *p_dst)
10 {
11 vkw::Device dev = *p_device;
12 Result res;
13
14 res = p_dst->shader.init(dev, p_shader_file_path);
15 if (res != VK_SUCCESS)
16 return res;
17
18 jl::array<vkw::DescrBind,256> dbinds;
19 uint32_t numBinds = 0;
20 auto put_part = [&numBinds, &dbinds] (vkw::DescrType dt, auto part) {
21 for (auto &b : part)
22 dbinds[numBinds++] = vkw::DescrBind::compute(b.binding, dt, 1);
23 };
24 put_part(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
25 put_part(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
26 put_part(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
27 put_part(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
28 put_part(vkw::DescrType::STORAGE_IMAGE, bi.storage_image);
29
30 res = p_dst->setLayout.init(dev, {dbinds.begin(), numBinds});
31 if (res != VK_SUCCESS)
32 goto CSH;
33
34 res = p_dst->layout.init(dev, p_dst->setLayout);
35 if (res != VK_SUCCESS)
36 goto CSL;
37
38 res = p_dst->pipeline.init(dev, vkw::PipelineCompute{
39 .stage = {vkw::ShaderStage::COMPUTE, p_dst->shader, p_spec},
40 .layout = p_dst->layout
41 });
42 if (res != VK_SUCCESS)
43 goto CL;
44
45 return res;
46
47 CL:
48 p_dst->layout.destroy(dev);
49 CSL:
50 p_dst->setLayout.destroy(dev);
51 CSH:
52 p_dst->shader.destroy(dev);
53 return res;
54 }
55 void ModuleCompute::
56 destroy_pipeline(Pipeline *p_pl) {
57 vkw::Device d = *p_device;
58 p_pl->pipeline.destroy(d);
59 p_pl->layout.destroy(d);
60 p_pl->setLayout.destroy(d);
61 p_pl->shader.destroy(d);
14 62 }
15 void ModuleCompute::destroy() {}
16 63
17 [[nodiscard]] Result ModuleCompute::
18 create_cmdUnit(CmdUnit **pp_dst) {
19 CmdUnit *&p = *pp_dst;
20 if (not jl::allocate(&p))
21 return VK_ERROR_OUT_OF_HOST_MEMORY;
64 [[nodiscard]] Result
65 init(Device *p_d, BindingBuffer *p, const BindingCreateInfo &info) {
66 DevMemUsage mem_use;
67 bool map = info.use & BindingUseFlag::TRANSFER_DST
68 or
69 info.use & BindingUseFlag::TRANSFER_SRC;
70
71 if (info.use & BindingUseFlag::UNIFORM_TEXEL
72 or
73 info.use & BindingUseFlag::UNIFORM)
74 mem_use = DevMemUsage::DYNAMIC_DST;
75 else
76 mem_use = DevMemUsage::STATIC;
22 77
23 Result res = p->init(p_device);
78 p->binding = info.bindingNo;
79
80 Result res;
81 res = p_d->buffer_allocator
82 .allocate(info.size, 0, mem_use, info.use, map, &p->part);
24 83 if (res != VK_SUCCESS)
25 jl::deallocate(&p);
84 return res;
85 p->use_staging = map and not p->part.is_mapped();
86 if (p->use_staging) {
87 if (info.use & BindingUseFlag::TRANSFER_DST)
88 mem_use = DevMemUsage::STAGING_STATIC_DST;
89 else
90 mem_use = DevMemUsage::STAGING_SRC;
91
92 vkw::BufferUsageMask usage = info.use
93 | BindingUseFlag::TRANSFER_DST
94 | BindingUseFlag::TRANSFER_SRC;
95 res = p_d->buffer_allocator
96 .allocate(info.size, 0, mem_use, usage, true, &p->staging);
97 if (res != VK_SUCCESS)
98 p_d->buffer_allocator.deallocate(p->part);
99 }
26 100 return res;
27 101 }
28 [[nodiscard]] Result ModuleCompute::
29 create_pipeline(const Bindings &bi, const char *p_shader_file_path,
30 vkw::ShaderSpecialization *p_spec, Pipeline *p_dst) {
31 return p_dst->init(p_device->device, bi, p_shader_file_path, p_spec);
102 void destroy(Device *p_d, BindingBuffer *p) {
103 if (p->use_staging)
104 p_d->buffer_allocator.deallocate(p->staging);
105 p_d->buffer_allocator.deallocate(p->part);
32 106 }
107
108 [[nodiscard]] Result
109 init(Device *p_d, BindingBufferView *p,
110 const BindingCreateInfo &info, VkFormat format) {
111 Result res;
112 res = init(p_d, static_cast<BindingBuffer*>(p), info);
113 if (res != VK_SUCCESS)
114 return res;
115 res = p->view.init(*p_d, p->part.buffer, format,
116 p->part.offset(), p->part.size());
117 if (res != VK_SUCCESS)
118 p_d->buffer_allocator.deallocate(p->part);
119 return res;
120 }
121 void destroy(Device *p_d, BindingBufferView *p) {
122 p->view.destroy(*p_d);
123 p_d->buffer_allocator.deallocate(p->part);
124 }
125
33 126 [[nodiscard]] Result ModuleCompute::
34 127 create_bindings(BindingCreateInfos infos, BindingBuffer *p_dst) {
35 128 Result res;
36 129 for (uint32_t i = 0; i < infos.count32(); ++i) {
37 res = p_dst[i].init(p_device, infos[i]);
130 res = init(p_device, &p_dst[i], infos[i]);
38 131 if (res != VK_SUCCESS) {
39 132 while (i > 0)
40 p_dst[i].destroy(p_device);
133 destroy(p_device, &p_dst[--i]);
41 134 return res;
42 135 }
43 136 }
 
... ... create_bindings(BindingCreateInfos infos, VkFormat *p_formats,
48 141 BindingBufferView *p_dst) {
49 142 Result res;
50 143 for (uint32_t i = 0; i < infos.count32(); ++i) {
51 res = p_dst[i].init(p_device, infos[i], p_formats[i]);
144 res = init(p_device, p_dst + i, infos[i], p_formats[i]);
52 145 if (res != VK_SUCCESS) {
53 146 while (i > 0)
54 p_dst[i].destroy(p_device);
147 destroy(p_device, p_dst + --i);
55 148 return res;
56 149 }
57 150 }
58 151 return VK_SUCCESS;
59 152 }
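The rollback loops in create_bindings above must destroy exactly the elements initialized before the failing one, in reverse order. The pattern generalizes as below (create_all is a hypothetical helper, not part of JEN; note the pre-decrement, which both terminates the loop and keeps it from touching the uninitialized slot):

```cpp
#include <cstdint>

// Partial-rollback pattern: initialize items[0..count); on the first
// failure, destroy only the already-initialized prefix [0, i) in
// reverse order and report failure.
template<typename T, typename Init, typename Destroy>
bool create_all(T *items, uint32_t count, Init init, Destroy destroy) {
    for (uint32_t i = 0; i < count; ++i) {
        if (!init(items[i])) {
            while (i > 0)
                destroy(items[--i]); // unwind only what succeeded
            return false;
        }
    }
    return true;
}
```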
60 153
61 [[nodiscard]] Result ModuleCompute::
62 create_images(ImageCreateInfos infos, Image *p_dst) {
63 Result res;
64 for (uint32_t i = 0; i < infos.count32(); ++i) {
65 res = p_dst[i].init(p_device, infos[i]);
66 if (res != VK_SUCCESS) {
67 while (i > 0)
68 p_dst[i].destroy(p_device);
69 return res;
70 }
154 [[nodiscard]] Result
155 init(Device *p_d, Image *p, const ImageCreateInfo &info) {
156 GpuImageInfo ii; {
157 ii.extent = {info.extent.x, info.extent.y, info.extent.z};
158 ii.layer_count = info.layer_count;
159 ii.mip_level_count = info.mip_level_count;
160 ii.format = info.format;
161 ii.type = info.type;
162 ii.samples = info.samples;
163 ii.usage = info.usage;
164 ii.flags = {};
165 ii.tiling = vkw::Tiling::OPTIMAL;
71 166 }
72 return VK_SUCCESS;
73 }
74 [[nodiscard]] Result ModuleCompute::
75 create_bindingSet(const Pipeline &pipeline, const Bindings &bindings,
76 BindingsSet *p_dst) {
77 return p_dst->init(p_device, pipeline.setLayout, bindings);
78 }
167 GpuImageViewInfo vi; {
168 vi.type = vkw::ImViewType(info.type);
169 vi.aspect = vkw::ImAspect::COLOR;
170 }
171 Result res = p->image.init(p_d, &ii, &vi);
172 if (res != VK_SUCCESS)
173 return res;
79 174
80 void check_transfer(const jen::vk::DeviceBufferPart &part,
81 vkw::DeviceSize offset, vkw::DeviceSize size) {
82 jassert(offset + size <= part.size(), "region exceeds buffer");
83 jassert(part.is_mapped(), "cannot access memory");
84 jassert(not part.is_flush_needed(), "flush not supported");
85 }
175 VkDeviceSize size = vkw::format_size(ii.format) * ii.extent.volume();
176 size *= ii.layer_count;
177 DevMemUsage mem_use = DevMemUsage::STAGING_STATIC_DST;
86 178
87 void
88 write_to_allocation(void *p_src, jen::vk::DeviceBufferPart *p_dst,
89 vkw::DeviceSize dst_offset, vkw::DeviceSize size) {
90 check_transfer(*p_dst, dst_offset, size);
91 memcpy(p_dst->p_data() + dst_offset, p_src, size);
92 }
179 res = p_d->buffer_allocator
180 .allocate(size, 0, mem_use, vkw::BufferUsage::TRANSFER_SRC
181 | vkw::BufferUsage::TRANSFER_DST, true, &p->staging);
182 if (res != VK_SUCCESS)
183 p->image.destroy(p_d);
93 184
94 void
95 read_from_allocation(jen::vk::DeviceBufferPart *p_src, void *p_dst,
96 vkw::DeviceSize src_offset, vkw::DeviceSize size) {
97 check_transfer(*p_src, src_offset, size);
98 memcpy(p_dst, p_src->p_data() + src_offset, size);
185 p->format = info.format;
186 p->layout = vkw::ImLayout::UNDEFINED;
187 p->mip_level_count = info.mip_level_count;
188 p->layer_count = info.layer_count;
189 return res;
190 }
191 void destroy(Device *p_d, Image *p) {
192 p_d->buffer_allocator.deallocate(p->staging);
193 p->image.destroy(p_d);
99 194 }
100 195
101 [[nodiscard]] jen::vk::Result
102 proceed_writes(jen::vk::Device *p_device,
103 CmdUnit *p_cmdUnit,
104 BufferTransfers buffer_writes,
105 ImagesTransfers images_writes)
106 {
107 auto &cmd = p_cmdUnit->transfer_cmds.primary[0];
108
109 auto begin = [&cmd, &p_cmdUnit]() -> jen::vk::Result {
110 if (not p_cmdUnit->wait_transfer_write) {
111 jen::vk::Result res;
112 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
113 if (res != VK_SUCCESS)
114 return res;
115 p_cmdUnit->wait_transfer_write = true;
116 }
117 return VK_SUCCESS;
118 };
119
120 for (uint32_t i = 0; i < buffer_writes.count(); ++i) {
121 auto &write = buffer_writes[i];
122 auto &buffer = *write.p_buffer;
123
124 jen::vk::DeviceBufferPart *p_part;
125 if (buffer.use_staging)
126 p_part = &buffer.staging;
127 else
128 p_part = &buffer.part;
129
130 write_to_allocation(write.p_data, p_part, write.offset, write.size);
131
132 if (buffer.use_staging) {
133 vkw::BufferChange bs;
134 bs.src = buffer.staging.buffer;
135 bs.dst = buffer.part.buffer;
136 vkw::BufferRegion region;
137 region.offsets.src = buffer.staging.offset();
138 region.offsets.dst = buffer.part.offset();
139 region.size = write.size;
140 auto res = begin();
141 if (res != VK_SUCCESS)
142 return res;
143 cmd.cmd_cp_buffer(bs, region);
144 }
145 }
146
147 for (uint32_t i = 0; i < images_writes.count(); ++i) {
148 auto res = begin();
149 if (res != VK_SUCCESS)
196 [[nodiscard]] Result ModuleCompute::
197 create_images(ImageCreateInfos infos, Image *p_dst) {
198 Result res;
199 for (uint32_t i = 0; i < infos.count32(); ++i) {
200 res = init(p_device, p_dst + i, infos[i]);
201 if (res != VK_SUCCESS) {
202 while (i > 0)
203 destroy(p_device, p_dst + --i);
150 204 return res;
151
152 auto &w = images_writes[i];
153 auto &im = *w.p_image;
154
155 if (im.layout != vkw::ImLayout::TRANSFER_DST) {
156 vkw::StageMaskChange stages;
157 stages.src = vkw::StageFlag::TOP_OF_PIPE;
158 stages.dst = vkw::StageFlag::TRANSFER;
159 im.transitionLayout(&cmd, vkw::ImLayout::TRANSFER_DST, stages);
160 205 }
161
162 vkw::DeviceSize offset = 0;
163 for (auto &r : w.transfers) {
164 auto size = r.extent.volume() * vkw::format_size(im.format)
165 * r.layer_count;
166 write_to_allocation(r.p_data, &im.staging, offset, size);
167
168 vkw::BufferAndImageRegion region; {
169 region.bufferOffset = im.staging.offset() + offset;
170 region.bufferRowLength = region.bufferImageHeight = 0;
171 region.imageSubresource = {
172 vkw::ImAspect::COLOR,
173 r.mip_level,
174 r.layer_offset,
175 r.layer_count
176 };
177 region.imageOffset.x = int32_t(r.offset.x);
178 region.imageOffset.y = int32_t(r.offset.y);
179 region.imageOffset.z = int32_t(r.offset.z);
180 region.imageExtent.width = r.extent.x;
181 region.imageExtent.height = r.extent.y;
182 region.imageExtent.depth = r.extent.z;
183 }
184 cmd.cmd_cp_buffer_to_image({im.staging.buffer, im.image.image},
185 region, vkw::ImLayout::TRANSFER_DST);
186
187 offset += size;
188 }
189 }
190
191 if (p_cmdUnit->wait_transfer_write) {
192 jen::vk::Result res;
193 res = cmd.end();
194 if (res != VK_SUCCESS)
195 return res;
196 vkw::QueueSignal signal(p_cmdUnit->syncs.semaphores[0].p_vk);
197 vkw::QueueSubmit submit(cmd, {}, signal);
198 res = p_device->queues.transfer.submit_locked(submit);
199 if (res != VK_SUCCESS)
200 return res;
201
202 for (uint32_t i = 0; i < images_writes.count(); ++i)
203 images_writes[i].p_image->layout = vkw::ImLayout::TRANSFER_DST;
204 206 }
205
206 207 return VK_SUCCESS;
207 208 }
208 209
209 210 [[nodiscard]] Result
210 proceed_staging_reads(Device *p_device,
211 CmdUnit *p_cmdUnit,
212 BufferTransfers buffer_reads,
213 ImagesTransfers images_reads)
211 init(Device *p_dev, BindingsSet *p, vkw::DescrLayout setLayout,
212 const Bindings &bi)
214 213 {
215 auto &cmd = p_cmdUnit->transfer_cmds.primary[1];
216 auto begin = [&cmd, &p_cmdUnit]() -> jen::vk::Result {
217 if (not p_cmdUnit->wait_transfer_read) {
218 jen::vk::Result res;
219 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
220 if (res != VK_SUCCESS)
221 return res;
222 p_cmdUnit->wait_transfer_read = true;
214 uint32_t numPoolPart = 0;
215 uint32_t numSets = 0;
216 jl::array<vkw::DescrPoolPart,4> pool_parts;
217 auto put_part = [&numSets, &numPoolPart, &pool_parts]
218 (vkw::DescrType dt, auto part) {
219 if (part.count32() > 0) {
220 pool_parts[numPoolPart].type = dt;
221 numSets += pool_parts[numPoolPart].count = part.count32();
222 ++numPoolPart;
223 223 }
224 return VK_SUCCESS;
225 224 };
225 put_part(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
226 put_part(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
227 put_part(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
228 put_part(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
229 put_part(vkw::DescrType::STORAGE_IMAGE, bi.storage_image);
226 230
227 for (uint32_t i = 0; i < buffer_reads.count(); ++i) {
228 auto &read = buffer_reads[i];
229 auto &buffer = *read.p_buffer;
230
231 if (buffer.use_staging) {
232 vkw::BufferChange bs;
233 bs.src = buffer.part.buffer;
234 bs.dst = buffer.staging.buffer;
235 vkw::BufferRegion region;
236 region.offsets.src = buffer.part.offset();
237 region.offsets.dst = buffer.staging.offset();
238 region.size = read.size;
239 auto res = begin();
240 if (res != VK_SUCCESS)
241 return res;
242 cmd.cmd_cp_buffer(bs, region);
243 }
244 }
245
246 for (uint32_t i = 0; i < images_reads.count(); ++i) {
247 auto res = begin();
248 if (res != VK_SUCCESS)
249 return res;
250
251 auto &w = images_reads[i];
252 auto &im = *w.p_image;
253
254 if (im.layout != vkw::ImLayout::TRANSFER_SRC) {
255 vkw::StageMaskChange stages;
256 stages.src = vkw::StageFlag::TOP_OF_PIPE;
257 stages.dst = vkw::StageFlag::TRANSFER;
258 im.transitionLayout(&cmd, vkw::ImLayout::TRANSFER_SRC, stages);
259 }
260
261 vkw::DeviceSize offset = 0;
262 for (auto &r : w.transfers) {
263 vkw::BufferAndImageRegion region; {
264 region.bufferOffset = im.staging.offset() + offset;
265 region.bufferRowLength = region.bufferImageHeight = 0;
266 region.imageSubresource = {
267 vkw::ImAspect::COLOR,
268 r.mip_level,
269 r.layer_offset,
270 r.layer_count
271 };
272 region.imageOffset.x = int32_t(r.offset.x);
273 region.imageOffset.y = int32_t(r.offset.y);
274 region.imageOffset.z = int32_t(r.offset.z);
275 region.imageExtent.width = r.extent.x;
276 region.imageExtent.height = r.extent.y;
277 region.imageExtent.depth = r.extent.z;
278 }
279 cmd.cmd_cp_image_to_buffer({im.image.image, im.staging.buffer},
280 region, vkw::ImLayout::TRANSFER_SRC);
281
282 offset += r.extent.volume() * vkw::format_size(im.format) * r.layer_count;
283 }
284 }
285
286 if (p_cmdUnit->wait_transfer_read) {
287 p_cmdUnit->wait_compute = false;
288 jen::vk::Result res;
289 res = p_cmdUnit->transfer_cmds.primary[1].end();
290 if (res != VK_SUCCESS)
291 return res;
292 vkw::QueueWait wait;
293 wait.semaphores = p_cmdUnit->syncs.semaphores[1].p_vk;
294 wait.stage_masks = vkw::StageFlag::COMPUTE_SHADER;
295 vkw::QueueSubmit submit(cmd, wait);
296 res = p_device->queues.transfer
297 .submit_locked(submit, p_cmdUnit->syncs.fences[1]);
298 if (res != VK_SUCCESS)
299 return res;
300
301 for (uint32_t i = 0; i < images_reads.count(); ++i)
302 images_reads[i].p_image->layout = vkw::ImLayout::TRANSFER_SRC;
303 }
304
305 return VK_SUCCESS;
306 }
307
308 [[nodiscard]] jen::vk::Result
309 wait_unit(jen::vk::Device *p_dev, CmdUnit *p_u) {
310 jen::vk::Result res;
311 if (p_u->wait_compute) {
312 res = p_u->syncs.fences[0].wait_and_reset(*p_dev, vkw::TIMEOUT_INFINITE);
313 if (res != VK_SUCCESS)
314 return res;
315 p_u->wait_compute = false;
316 }
317 if (p_u->wait_transfer_read) {
318 res = p_u->syncs.fences[1].wait_and_reset(*p_dev, vkw::TIMEOUT_INFINITE);
319 if (res != VK_SUCCESS)
320 return res;
321 p_u->wait_transfer_read = false;
322 }
323 return VK_SUCCESS;
324 }
325 [[nodiscard]] jen::vk::Result
326 status_unit(jen::vk::Device *p_dev, CmdUnit *p_u) {
327 jen::vk::Result res;
328 if (p_u->wait_compute) {
329 res = p_u->syncs.fences[0].status(*p_dev);
330 if (res != VK_SUCCESS)
331 return res;
332 }
333 if (p_u->wait_transfer_read) {
334 res = p_u->syncs.fences[1].status(*p_dev);
335 if (res != VK_SUCCESS)
336 return res;
337 }
338 return VK_SUCCESS;
339 }
340
341 [[nodiscard]] Result
342 check_computeInfo(const Device &device,
343 const ComputeInfo &info) {
344 for (int i = 0; i < 3; ++i)
345 if (info.group_count[i] >
346 device.properties.limits.maxComputeWorkGroupCount[i]) {
347 fprintf(stderr, "ComputeInfo.group_count[%i] "
348 "exceeds device limit maxComputeWorkGroupCount[%i]="
349 "%i. "
350 "Max of 65535 is recommended because it is "
351 "minimum possible supported value\n", i, i,
352 device.properties.limits.maxComputeWorkGroupCount[i]);
353 return vkw::ERROR_INVALID_USAGE;
354 }
355 return VK_SUCCESS;
356 }
357
358 [[nodiscard]] Result ModuleCompute::
359 compute(const ComputeInfo &info)
360 {
361 231 Result res;
362 res = check_computeInfo(*p_device, info);
232 res = p->pool.init(*p_dev, {}, {pool_parts.begin(), numPoolPart}, numSets);
363 233 if (res != VK_SUCCESS)
364 234 return res;
365 235
366 res = wait_unit(p_device, info.p_cmdUnit);
367 if (res != VK_SUCCESS)
236 res = p->pool.allocate_set(p_dev->device, setLayout, &p->set);
237 if (res != VK_SUCCESS) {
238 p->pool.destroy(p_dev->device);
368 239 return res;
369
370 res = proceed_writes(p_device, info.p_cmdUnit,
371 info.buffer_writes, info.images_writes);
372 if (res != VK_SUCCESS)
373 return res;
374
375 auto &syncs = info.p_cmdUnit->syncs;
376 auto &cmds = info.p_cmdUnit->compute_cmds;
377 auto &pipeline = info.p_pipeline->pipeline;
378 auto &pipelineLayout = info.p_pipeline->layout;
379 auto &set = info.p_bindingsSet->set;
380
381 auto &cmd = cmds.primary[0];
382 res = cmd.begin(vkw::CmdUsage::ONE_TIME_SUBMIT);
383 if (res != VK_SUCCESS)
384 return res;
385
386 for (auto &im : info.p_bindings->storage_image) {
387 auto l = vkw::ImLayout::GENERAL;
388 if (im.p_image->layout == l)
389 continue;
390 vkw::StageMaskChange stages;
391 stages.src = vkw::StageFlag::TOP_OF_PIPE;
392 stages.dst = vkw::StageFlag::COMPUTE_SHADER;
393 im.p_image->transitionLayout(&cmd, l, stages);
394 240 }
395 241
396 cmd.cmd_set_pipeline(pipeline, vkw::BindPoint::COMPUTE);
397
398 cmd.cmd_set_descr_sets(vkw::BindPoint::COMPUTE, pipelineLayout, set, 0);
399 cmd.cmd_dispatch(*reinterpret_cast<const vkw::Vector3D*>(&info.group_count));
400
401 res = cmd.end();
402 if (res != VK_SUCCESS)
403 return res;
404
405 bool use_read_semaphore = false;
406 if (info.images_reads.count() > 0)
407 use_read_semaphore = true;
408 else for (uint32_t i = 0; i < info.buffer_reads.count(); ++i) {
409 if (info.buffer_reads[i].p_buffer->use_staging) {
410 use_read_semaphore = true;
411 break;
412 }
413 }
414
415 vkw::QueueWait wait;
416 if (info.p_cmdUnit->wait_transfer_write) {
417 wait.semaphores = syncs.semaphores[0].p_vk;
418 wait.stage_masks = vkw::StageFlag::TRANSFER;
419 }
420 else
421 wait = {};
422 vkw::QueueSignal signal;
423 if (use_read_semaphore)
424 signal = syncs.semaphores[1].p_vk;
425 else
426 signal = {};
427 vkw::QueueSubmit submit(cmd, wait, signal);
428
429 res = p_device->queues.compute.submit_locked(submit, syncs.fences[0]);
430 if (res != VK_SUCCESS)
431 return res;
242 auto &set = p->set;
243 auto set_views = [&set, p_dev] (vkw::DescrType dt, auto sets) {
244 for (auto &b : sets)
245 set.set(*p_dev, b.binding, dt, b.view);
246 };
247 set_views(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
248 set_views(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
432 249
433 for (auto &im : info.p_bindings->storage_image) {
434 auto l = vkw::ImLayout::GENERAL;
435 im.p_image->layout = l;
250 auto set_buffers = [&set, p_dev] (vkw::DescrType dt, auto sets) {
251 for (auto &b : sets)
252 set.set(*p_dev, b.binding, dt, b.part.range());
253 };
254 set_buffers(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
255 set_buffers(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
256
257 for (auto &b : bi.storage_image) {
258 vkw::DescrImage des;
259 des.sampler = {};
260 des.imageView = b.p_image->image.view;
261 des.imageLayout = vkw::ImLayout::GENERAL;
262 set.set(*p_dev, b.binding, vkw::DescrType::STORAGE_IMAGE, des);
436 263 }
437 264
438 info.p_cmdUnit->wait_compute = true;
439
440 res = proceed_staging_reads(p_device, info.p_cmdUnit,
441 info.buffer_reads, info.images_reads);
442 if (res != VK_SUCCESS)
443 return res;
444
445 265 return res;
446 266 }
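The BindingsSet initialization above sizes its descriptor pool by scanning each binding category and emitting one pool part per non-empty descriptor type, summing the counts to get the pool capacity. The counting pass, reduced to plain data (PoolPart and add_pool_part are illustration-only names, not JEN API):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// One (descriptor type, descriptor count) entry of a pool-size array,
// mirroring the put_part lambda in the BindingsSet init.
using PoolPart = std::pair<int /*type*/, uint32_t /*count*/>;

// Appends a pool part only when the category is non-empty and returns
// its contribution to the total descriptor count.
inline uint32_t
add_pool_part(std::vector<PoolPart> &parts, int type, uint32_t count) {
    if (count == 0)
        return 0;              // empty categories add no pool part
    parts.emplace_back(type, count);
    return count;
}
```

Summing the return values over all categories yields the numSets-style total passed to the pool's init.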
447
448 [[nodiscard]] Result ModuleCompute::
449 read(CmdUnit *p_cmdUnit,
450 BufferTransfers buffer_reads, ImagesTransfers images_reads)
451 {
452 Result res;
453 res = wait_unit(p_device, p_cmdUnit);
454 if (res != VK_SUCCESS)
455 return res;
456
457 for (uint32_t i = 0; i < buffer_reads.count(); ++i) {
458 auto &read = buffer_reads[i];
459 auto &buffer = *read.p_buffer;
460
461 jen::vk::DeviceBufferPart *p_part;
462 if (buffer.use_staging)
463 p_part = &buffer.staging;
464 else
465 p_part = &buffer.part;
466
467 read_from_allocation(p_part, read.p_data, read.offset, read.size);
468 }
469
470 for (uint32_t i = 0; i < images_reads.count(); ++i) {
471 auto &read = images_reads[i];
472 auto &im = *read.p_image;
473 auto p_part = &im.staging;
474
475 vkw::DeviceSize offset = 0;
476 for (auto &r : read.transfers) {
477 auto size = r.extent.volume() * vkw::format_size(im.format)
478 * r.layer_count;
479 read_from_allocation(p_part, r.p_data, offset, size);
480 offset += size;
481 }
482 }
483
484 return VK_SUCCESS;
267 void destroy(Device *p_dev, BindingsSet *p) {
268 p->pool.destroy(*p_dev);
485 269 }
486 270
487 271 [[nodiscard]] Result ModuleCompute::
488 status_compute(CmdUnit *p_cmd) {
489 return status_unit(p_device, p_cmd);
272 create_bindingSet(const Pipeline &pipeline, const Bindings &bindings,
273 BindingsSet *p_dst) {
274 return init(p_device, p_dst, pipeline.setLayout, bindings);
490 275 }
491 276
277
492 278 void ModuleCompute::
493 279 destroy_bindingSet(BindingsSet *p_set) {
494 p_set->destroy(p_device);
280 destroy(p_device, p_set);
495 281 }
496 282 void ModuleCompute::
497 283 destroy_bindings(BindingBuffer *p_bs, uint32_t count) {
498 284 for (uint32_t i = 0; i < count; ++i)
499 p_bs[i].destroy(p_device);
285 destroy(p_device, p_bs + i);
500 286 }
501 287 void ModuleCompute::
502 288 destroy_bindings(BindingBufferView *p_bs, uint32_t count) {
503 289 for (uint32_t i = 0; i < count; ++i)
504 p_bs[i].destroy(p_device);
290 destroy(p_device, p_bs + i);
505 291 }
506 292 void ModuleCompute::
507 293 destroy_images(Image *p_ims, uint32_t count) {
508 294 for (uint32_t i = 0; i < count; ++i)
509 p_ims[i].destroy(p_device);
510 }
511 void ModuleCompute::
512 destroy_pipeline(Pipeline *p_pl) {
513 p_pl->destroy(p_device->device);
514 }
515
516 void ModuleCompute::destroy_cmdUnit(CmdUnit *p) {
517 p->destroy(p_device);
518 jl::deallocate(&p);
295 destroy(p_device, p_ims + i);
519 296 }
File src/compute/compute.h deleted (index 5e269a2..0000000)
1 #pragma once
2
3 #include "binding_set.h"
4 #include "pipeline.h"
5 #include "cmd_unit.h"
6
7 namespace jen::compute
8 {
9 struct ImageTransfer {
10 uint32_t mip_level;
11 uint32_t layer_offset;
12 uint32_t layer_count;
13 math::v3u32 offset;
14 math::v3u32 extent;
15 void *p_data;
16 };
17 struct ImageTransfers {
18 Image *p_image;
19 jl::rarray<const ImageTransfer> transfers;
20 };
21 using ImagesTransfers = jl::rarray<const ImageTransfers>;
22
23 struct BufferTransfer {
24 BindingBuffer *p_buffer;
25 vkw::DeviceSize offset;
26 vkw::DeviceSize size;
27 void *p_data;
28 };
29 using BufferTransfers = jl::rarray<const BufferTransfer>;
30
31 struct ComputeInfo {
32 CmdUnit *p_cmdUnit;
33 Pipeline *p_pipeline;
34 BindingsSet *p_bindingsSet;
35 Bindings *p_bindings;
36 math::v3u32 group_count;
37 BufferTransfers buffer_writes;
38 BufferTransfers buffer_reads;
39 ImagesTransfers images_writes;
40 ImagesTransfers images_reads;
41 };
42
43 constexpr static const uint32_t MAX_WORKGROUP_COUNT = 65535;
44 }
45 namespace jen
46 {
47 struct ModuleCompute
48 {
49 [[nodiscard]] Result init(vk::Device *p_dev);
50 void destroy();
51
52 [[nodiscard]] Result create_cmdUnit(compute::CmdUnit **pp_dst);
53
54 [[nodiscard]] Result
55 create_pipeline(const compute::Bindings &bi, const char *p_shader_file_path,
56 vkw::ShaderSpecialization *p_specialization,
57 compute::Pipeline *p_dst);
58 [[nodiscard]] Result
59 create_bindings(compute::BindingCreateInfos infos,
60 compute::BindingBuffer *p_dst);
61 [[nodiscard]] Result
62 create_bindings(compute::BindingCreateInfos infos, VkFormat *p_formats,
63 compute::BindingBufferView *p_dst);
64 [[nodiscard]] Result
65 create_images(compute::ImageCreateInfos infos, compute::Image *p_dst);
66
67 [[nodiscard]] Result
68 create_bindingSet(const compute::Pipeline &pipeline,
69 const compute::Bindings &bindings,
70 compute::BindingsSet *p_dst);
71
72 [[nodiscard]] Result compute(const compute::ComputeInfo&);
73
74 [[nodiscard]] Result
75 read(compute::CmdUnit *p_cmdUnit,
76 compute::BufferTransfers buffer_reads,
77 compute::ImagesTransfers image_reads);
78
79 [[nodiscard]] Result status_compute(compute::CmdUnit *p_cmd);
80
81 void destroy_bindingSet(compute::BindingsSet *p_set);
82 void destroy_bindings(compute::BindingBuffer *p_bs, uint32_t count = 1);
83 void destroy_bindings(compute::BindingBufferView *p_bs, uint32_t count = 1);
84 void destroy_images(compute::Image *p_ims, uint32_t count = 1);
85 void destroy_pipeline(compute::Pipeline *p_pl);
86 void destroy_cmdUnit(compute::CmdUnit *p);
87
88 vk::Device *p_device;
89 };
90 }
File src/compute/pipeline.h deleted (index 237e88a..0000000)
1 #pragma once
2
3 #include "binding_set.h"
4
5 namespace jen::compute {
6 struct Pipeline
7 {
8 [[nodiscard]] Result
9 init(vkw::Device device, const Bindings &bi, const char *p_shader_filepath,
10 const vkw::ShaderSpecialization *p_specialization)
11 {
12 Result res;
13
14 res = shader.init(device, p_shader_filepath);
15 if (res != VK_SUCCESS)
16 return res;
17
18 jl::array<vkw::DescrBind,256> dbinds;
19 uint32_t numBinds = 0;
20 auto put_part = [&numBinds, &dbinds] (vkw::DescrType dt, auto part) {
21 for (auto &b : part)
22 dbinds[numBinds++] = vkw::DescrBind::compute(b.binding, dt, 1);
23 };
24 put_part(vkw::DescrType::UNIFORM_TEXEL_BUFFER, bi.uniform_texel_buffer);
25 put_part(vkw::DescrType::STORAGE_TEXEL_BUFFER, bi.storage_texel_buffer);
26 put_part(vkw::DescrType::UNIFORM_BUFFER, bi.uniform_buffer);
27 put_part(vkw::DescrType::STORAGE_BUFFER, bi.storage_buffer);
28 put_part(vkw::DescrType::STORAGE_IMAGE, bi.storage_image);
29
30 res = setLayout.init(device, {dbinds.begin(), numBinds});
31 if (res != VK_SUCCESS)
32 goto CSH;
33
34 res = layout.init(device, setLayout);
35 if (res != VK_SUCCESS)
36 goto CSL;
37
38 res = pipeline.init(device, vkw::PipelineCompute{
39 .stage = {vkw::ShaderStage::COMPUTE, shader, p_specialization},
40 .layout = layout
41 });
42 if (res != VK_SUCCESS)
43 goto CL;
44
45 return res;
46
47 CL:
48 layout.destroy(device);
49 CSL:
50 setLayout.destroy(device);
51 CSH:
52 shader.destroy(device);
53 return res;
54 }
55
56 void destroy(vkw::Device d) {
57 pipeline.destroy(d);
58 layout.destroy(d);
59 setLayout.destroy(d);
60 shader.destroy(d);
61 }
62
63 vkw::ShaderModule shader;
64 vkw::Pipeline pipeline;
65 vkw::PipelineLayout layout;
66 vkw::DescrLayout setLayout;
67 };
68 }
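Pipeline::init above unwinds with a goto ladder: each label destroys exactly the resources created before the point of failure, in reverse order of creation. A minimal self-contained sketch of the same idiom (Res, Part and Ladder are hypothetical stand-ins for vkw::Result and the vkw resource types; only the cleanup shape matters):

```cpp
#include <vector>

// Hypothetical result type standing in for vkw::Result.
enum Res { OK, FAIL };

// Hypothetical resource; destroy() records its id so the unwind order
// can be observed.
struct Part {
    int id;
    std::vector<int> *p_log;
    Res init(int i, bool fail, std::vector<int> *log) {
        id = i; p_log = log; return fail ? FAIL : OK;
    }
    void destroy() { p_log->push_back(id); }
};

struct Ladder {
    Part shader, setLayout, layout;
    // fail_at selects which init call fails (0..2); 3 means full success.
    // On failure, only the parts initialized before it are destroyed,
    // newest first -- exactly the CL/CSL/CSH ladder in Pipeline::init.
    Res init(int fail_at, std::vector<int> *log) {
        Res res;
        res = shader.init(0, fail_at == 0, log);
        if (res != OK) return res;
        res = setLayout.init(1, fail_at == 1, log);
        if (res != OK) goto CSH;
        res = layout.init(2, fail_at == 2, log);
        if (res != OK) goto CSL;
        return OK;
    CSL: setLayout.destroy();
    CSH: shader.destroy();
        return res;
    }
};
```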
File src/configuration.h.in added (mode: 100644) (index 0000000..5a5a018)
1 #pragma once
2 //${JEN_CONFIGURATION_WARNING}
3 #define JEN_NAME "${JEN_NAME}"
4 #define JEN_VERSION_MAJOR ${JEN_VERSION_MAJOR}
5 #define JEN_VERSION_MINOR ${JEN_VERSION_MINOR}
6 #define JEN_VERSION_PATCH ${JEN_VERSION_PATCH}
7 #define JEN_MODULE_GRAPHICS ${JEN_MODULE_GRAPHICS}
8 #define JEN_MODULE_COMPUTE ${JEN_MODULE_COMPUTE}
9 #define JEN_MODULE_RESOURCE_MANAGER ${JEN_MODULE_RESOURCE_MANAGER}
File src/descriptors.cpp added (mode: 100644) (index 0000000..e222bf9)
1 #include <jen/detail/descriptors.h>
2 #include "device/device.h"
3
4 [[nodiscard]] jen::Result jen::DescriptorUniformBuffer::
5 init(Device *p_dev, vkw::DeviceSize size)
6 {
7 vkw::DeviceSize align;
8 align = jl::max(p_dev->properties.limits.minUniformBufferOffsetAlignment,
9 p_dev->properties.limits.nonCoherentAtomSize);
10
11 uint32_t use = vkw::BufferUsage::TRANSFER_DST | vkw::BufferUsage::UNIFORM;
12 Result res;
13 res = p_dev->buffer_allocator.allocate(size, align, DevMemUsage::DYNAMIC_DST,
14 use, true, &allocation);
15 if (res != VK_SUCCESS)
16 return res;
17 isFlushNeeded = (allocation.mem_props & vkw::MemProp::HOST_COHERENT) == 0;
18 return res;
19 }
20 void jen::DescriptorUniformBuffer::
21 destroy(jen::Device *p_dev) {
22 p_dev->buffer_allocator.deallocate(allocation);
23 }
24
25 [[nodiscard]] jen::Result
26 create_buffer(jen::Device *p_dev, jen::DescriptorUniformDynamic *p_set,
27 vkw::DeviceSize size, uint32_t count)
28 {
29 vkw::DeviceSize alignment;
30 alignment = jl::max(p_dev->properties.limits.minUniformBufferOffsetAlignment,
31 p_dev->properties.limits.nonCoherentAtomSize);
32 p_set->single_size = size;
33 p_set->aligned_size = math::round_up(size, alignment);
34 p_set->size = p_set->aligned_size * count;
35
36 uint32_t use = vkw::BufferUsage::TRANSFER_DST | vkw::BufferUsage::UNIFORM;
37 auto res = p_dev->buffer_allocator
38 .allocate(p_set->size, alignment, jen::DevMemUsage::DYNAMIC_DST,
39 use, true, &p_set->allocation);
40 if (res != VK_SUCCESS)
41 return res;
42 p_set->isFlushNeeded = p_set->allocation.is_flush_needed();
43 return res;
44 }
45
46 [[nodiscard]] jen::Result
47 create_set(vkw::Device dev, jen::DescriptorUniformDynamic *p_set,
48 uint32_t bind_no, vkw::DescrPool pool)
49 {
50 jen::Result res;
51 res = pool.allocate_set(dev, p_set->layout, &p_set->set);
52 if (res != VK_SUCCESS)
53 return res;
54 vkw::DescrBuffer info; {
55 info.offset = p_set->allocation.offset();
56 info.size = p_set->allocation.size();
57 info.buffer = p_set->allocation.buffer;
58 }
59 p_set->set.set(dev, bind_no, vkw::DescrType::UNIFORM_BUFFER_DYNAMIC, info);
60 return VK_SUCCESS;
61 }
62
63 [[nodiscard]] jen::Result jen::DescriptorUniformDynamic::
64 init(Device *p_dev, vkw::DeviceSize size,
65 uint32_t count, vkw::DescrBind binding, vkw::DescrPool pool)
66 {
67 Result res;
68 res = layout.init(p_dev->device, binding);
69 if (res != VK_SUCCESS)
70 return res;
71 res = create_buffer(p_dev, this, size, count);
72 if (res != VK_SUCCESS)
73 goto C_LAYOUT;
74 res = create_set(p_dev->device, this, binding.bind_no, pool);
75 if (res != VK_SUCCESS)
76 goto C_BUFFER;
77 return res;
78 C_BUFFER: p_dev->buffer_allocator.deallocate(allocation);
79 C_LAYOUT: layout.destroy(p_dev->device);
80 return res;
81 }
82
83 void jen::DescriptorUniformDynamic::
84 destroy(Device *p_dev, vkw::DescrPool pool) {
85 pool.deallocate_sets(*p_dev, set);
86 layout.destroy(*p_dev);
87 p_dev->buffer_allocator.deallocate(allocation);
88 }
89
90 [[nodiscard]] jen::Result jen::DescriptorUniformDynamic::
91 flush(Device *p_dev, uint32_t index) {
92 if (not isFlushNeeded)
93 return VK_SUCCESS;
94 auto atom = p_dev->properties.limits.nonCoherentAtomSize;
95 vkw::MemoryRange range;
96 if (size <= atom)
97 range = {allocation.memory, allocation.offset(), size};
98 else if (single_size <= atom)
99 range = {allocation.memory, offset(index), atom};
100 else {
101 range = {allocation.memory, allocation.offset(),
102 math::round_up(size, atom)};
103 }
104 return vkw::flush_memory(p_dev->device, range);
105 }
106
107 [[nodiscard]] jen::Result jen::DescriptorTextureAllocator::Pool::
108 init(vkw::Device device) {
109 consumed = 0;
110 vkw::DescrPoolPart part; {
111 part.type = vkw::DescrType::COMBINED_IMAGE_SAMPLER;
112 part.count = MAX;
113 }
114 return pool.init(device, vkw::DescrPool::Flag::FREE_DESCRIPTOR_SET, part, MAX);
115 }
116
117 [[nodiscard]] jen::Result jen::DescriptorTextureAllocator::
118 init(vkw::Device device) {
119 Result res;
120 vkw::DescrBind binding(0, vkw::DescrType::COMBINED_IMAGE_SAMPLER,
121 1, vkw::ShaderStage::FRAGMENT);
122 res = layout.init(device, binding);
123 if (res != VK_SUCCESS) goto CANCEL;
124 if (not pools.init(2))
125 goto C_LAYOUT;
126 if (not lock.init()) {
127 pools.destroy();
128 goto C_LAYOUT;
129 }
130 return res;
131 C_LAYOUT: layout.destroy(device);
132 CANCEL: return res;
133 }
134
135 void jen::DescriptorTextureAllocator::destroy(vkw::Device device) {
136 jassert_soft(pools.count() == 0, "descriptors not cleaned up");
137 pools.destroy([](auto &i, auto dev){ i.destroy(dev);}, device);
138 lock.destroy();
139 layout.destroy(device);
140 }
141
142 [[nodiscard]] jen::Result jen::DescriptorTextureAllocator::
143 create(vkw::Device dev, vkw::Sampler sampler, vkw::ImView view, Set *p_dst) {
144 Result res;
145 uint_fast8_t pool_index = 0;
146 lock.lock();
147 {
148 for (;pool_index < pools.count(); ++pool_index)
149 if (pools[pool_index].consumed < Pool::MAX)
150 goto POOL_READY;
151 if (not pools.insert_dummy()) {
152 lock.unlock(); return VK_ERROR_OUT_OF_HOST_MEMORY; }
153 res = pools[pool_index].init(dev);
154 if (res != VK_SUCCESS)
155 goto C_ARRAY;
156
157 POOL_READY:
158 res = pools[pool_index].pool.allocate_set(dev, layout, &p_dst->set);
159 if (res != VK_SUCCESS)
160 goto C_POOL;
161 ++pools[pool_index].consumed;
162 p_dst->pool = pools[pool_index].pool;
163
164 vkw::DescrImage info; {
165 info.sampler = sampler;
166 info.imageView = view;
167 info.imageLayout = vkw::ImLayout::SHADER_READ_ONLY;
168 }
169 p_dst->set.set(dev, 0, vkw::DescrType::COMBINED_IMAGE_SAMPLER, info);
170 }
171 lock.unlock();
172 return VK_SUCCESS;
173
174 C_POOL: pools[pool_index].destroy(dev);
175 C_ARRAY: pools.remove_last();
176 lock.unlock();
177 return res;
178 }
179
180 void jen::DescriptorTextureAllocator::destroy(vkw::Device device, Set set) {
181 lock.lock();
182 {
183 set.pool.deallocate_sets(device, set.set);
184 uint_fast8_t i = 0;
185 for (; pools[i].pool != set.pool; ++i)
186 jassert(i < pools.count(), "descriptor texture set has incorrect pool");
187
188 --pools[i].consumed;
189 if (pools[i].consumed == 0) {
190 pools[i].destroy(device);
191 pools.remove(i);
192 }
193 }
194 lock.unlock();
195 }
196
197 [[nodiscard]] jen::Result jen::DescriptorImageView::
198 init(vkw::Device dev, vkw::DescrPool p, vkw::ImView v, vkw::Sampler s) {
199 Result res;
200 vkw::DescrBind binding(0, s.is_null() ? DESCR_TYPE : DESCR_TYPE_SAMPLER,
201 1, vkw::ShaderStage::FRAGMENT);
202 res = layout.init(dev, binding);
203 if (res != VK_SUCCESS)
204 return res;
205 res = p.allocate_set(dev, layout, &set);
206 if (res != VK_SUCCESS)
207 layout.destroy(dev);
208 else
209 update(dev, v, s);
210 return res;
211 }
212
213 void jen::DescriptorImageView::
214 update(vkw::Device d, vkw::ImView v, vkw::Sampler s) {
215 vkw::DescrImage i; {
216 i.sampler = s;
217 i.imageView = v;
218 i.imageLayout = vkw::ImLayout::SHADER_READ_ONLY;
219 } set.set(d, 0, s.is_null() ? DESCR_TYPE : DESCR_TYPE_SAMPLER, i);
220 }
221
222 void jen::DescriptorImageView::
223 destroy(vkw::Device device, vkw::DescrPool pool) {
224 pool.deallocate_sets(device, set);
225 layout.destroy(device);
226 }
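The sizing math in create_buffer pads each element to the device's minimum alignment so that per-element offsets remain valid dynamic descriptor offsets. The same arithmetic in isolation (round_up is assumed to behave like math::round_up; DynamicUniform is a hypothetical stand-in for DescriptorUniformDynamic):

```cpp
#include <cstdint>

// Assumed behavior of math::round_up: smallest multiple of align >= v.
constexpr uint64_t round_up(uint64_t v, uint64_t align) {
    return (v + align - 1) / align * align;
}

struct DynamicUniform {
    uint64_t single_size, aligned_size, size;

    // Mirrors create_buffer: pad each element to the alignment, then
    // reserve count padded slots in one allocation.
    void compute(uint64_t elem, uint64_t alignment, uint32_t count) {
        single_size  = elem;
        aligned_size = round_up(elem, alignment);
        size         = aligned_size * count;
    }

    // Byte offset of the i-th element, usable as a dynamic offset when
    // binding a UNIFORM_BUFFER_DYNAMIC descriptor.
    uint64_t offset(uint32_t i) const { return aligned_size * i; }
};
```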
File src/device/allocator/buffer.cpp deleted (index 55828d8..0000000)
1 #include "buffer.h"
2
3 #include <math/misc.h>
4
5 [[nodiscard]] vkw::Result jen::vk::DeviceBuffer::
6 init(vkw::Device device,
7 const vkw::DeviceMemProps &dmp,
8 vkw::DeviceSize size,
9 vkw::MemPropMask mem_props,
10 vkw::BufferUsageMask buf_usage,
11 bool map)
12 {
13 vkw::Result res;
14 res = buffer.init(device, vkw::Buffer::Mask(), size, buf_usage);
15 if (res != VK_SUCCESS)
16 return res;
17
18 auto rs = buffer.memoryRequirements(device);
19
20 rs.type_mask = vkw::filter_mem_types(dmp, rs.type_mask, mem_props);
21 if (rs.type_mask == 0)
22 return vkw::ERROR_DEVICE_MEMORY_TYPE_NOT_FOUND;
23
24 jl::array<bool, 16> heap_used = {};
25 for (uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i) {
26 if (rs.type_mask & (1<<i)) {
27 auto &heap_i = dmp.memoryTypes[i].heapIndex;
28 if (heap_used[heap_i])
29 continue;
30 heap_used[heap_i] = true;
31 res = memory.allocate(device, rs.size, i);
32 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
33 continue; // try a type on the next heap
34 if (res != VK_SUCCESS)
35 goto DESTROY_BUFFER;
36 this->mem_props = dmp.memoryTypes[i].propertyFlags;
37 goto ALLOCATED;
38 }
39 }
40
41 res = vkw::ERROR_DEVICE_MEMORY_TYPE_NOT_FOUND;
42 goto DESTROY_BUFFER;
43
44 ALLOCATED:
45
46
47 res = buffer.bind_memory(device, memory, 0);
48 if (res != VK_SUCCESS)
49 goto FREE_MEMORY;
50
51 if (map and this->mem_props & vkw::MemProp::HOST_VISIBLE) {
52 res = memory.map(device, 0, rs.size, &p_mapped);
53 if (res != VK_SUCCESS)
54 goto FREE_MEMORY;
55 }
56 else p_mapped = nullptr;
57
58 return VK_SUCCESS;
59
60 FREE_MEMORY:
61 memory.deallocate(device);
62 DESTROY_BUFFER:
63 buffer.destroy(device);
64 return res;
65 }
66
67 void jen::vk::DeviceBuffer::destroy(vkw::Device device) {
68 if (p_mapped != nullptr)
69 memory.unmap(device);
70 memory.deallocate(device);
71 buffer.destroy(device);
72 }
73
74
75 [[nodiscard]] vkw::Result jen::vk::DeviceBufferAtlas::
76 init(vkw::Device device,
77 const vkw::DeviceMemProps &dmp,
78 vkw::DeviceSize size,
79 vkw::MemPropMask mem_props,
80 vkw::BufferUsageMask buf_usage,
81 bool map,
82 DeviceBufferPart *p_dst)
83 {
84 vkw::DeviceSize preferred_size = preferred_allocation_size();
85
86 vkw::DeviceSize allocation_size;
87 if (size < preferred_size)
88 allocation_size = preferred_size;
89 else
90 allocation_size = math::round_up(size, preferred_size);
91
92 vkw::Result res;
93 res = buffer.init(device, dmp, allocation_size, mem_props, buf_usage, map);
94 if (res != VK_SUCCESS)
95 return res;
96
97 if (not atlas.init(allocation_size, 8)) {
98 res = VK_ERROR_OUT_OF_HOST_MEMORY;
99 goto DESTROY_BUFFER;
100 }
101
102 atlas::Result ares;
103 ares = atlas.add(size, &p_dst->region);
104 jassert(ares == atlas::Result::SUCCESS, "atlas can't fail here");
105
106
107 p_dst->p_mapped = buffer.p_mapped;
108 p_dst->buffer = buffer.buffer;
109 p_dst->memory = buffer.memory;
110 p_dst->mem_props = buffer.mem_props;
111
112 return VK_SUCCESS;
113
114 DESTROY_BUFFER:
115 buffer.destroy(device);
116 return res;
117 }
118
119 void jen::vk::DeviceBufferAtlas::destroy(vkw::Device device) {
120 jassert_soft(is_empty(), "not empty while destroying\n");
121 atlas.destroy();
122 buffer.destroy(device);
123 }
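DeviceBuffer::init walks the memory types allowed by the filtered type mask but tries each heap only once: a second type backed by a heap that just failed with out-of-device-memory cannot succeed either. A self-contained sketch of that selection loop (pick_memory_type and try_alloc are hypothetical; try_alloc stands in for vkw::Memory::allocate succeeding on that type):

```cpp
#include <array>
#include <cstdint>

// Returns the first memory type index accepted by try_alloc, or -1.
// heap_of_type maps each type index to its heap index, like
// VkPhysicalDeviceMemoryProperties::memoryTypes[i].heapIndex.
template<class TryAlloc>
int pick_memory_type(uint32_t type_mask,
                     const std::array<uint32_t, 32> &heap_of_type,
                     uint32_t type_count, TryAlloc try_alloc)
{
    std::array<bool, 16> heap_used{};
    for (uint32_t i = 0; i < type_count; ++i) {
        if (!(type_mask & (1u << i)))
            continue;                    // type filtered out by requirements
        uint32_t heap = heap_of_type[i];
        if (heap_used[heap])
            continue;                    // this heap already failed; skip it
        heap_used[heap] = true;
        if (try_alloc(i))
            return int(i);
    }
    return -1;
}
```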
File src/device/allocator/buffer.h deleted (index f916b66..0000000)
1 // created by Jackalope in 16.10.2018
2 #pragma once
3
4 #include <vkw/buffer.h>
5 #include <vkw/descriptor_set.h>
6 #include <atlas/atlas.h>
7
8 namespace jen::vk
9 {
10 struct DeviceBuffer
11 {
12
13 [[nodiscard]] vkw::Result
14 init(vkw::Device device,
15 const vkw::DeviceMemProps &dmp,
16 vkw::DeviceSize size,
17 vkw::MemPropMask mem_props,
18 vkw::BufferUsageMask buf_usage,
19 bool map);
20
21 void destroy(vkw::Device);
22
23 vkw::Buffer buffer;
24 vkw::Memory memory;
25 vkw::MemPropMask mem_props;
26 /// Mapped pointer; set only if the memory has the HOST_VISIBLE property
27 uint8_t *p_mapped;
28 };
29
30 struct DeviceBufferAtlas;
31
32 struct DeviceBufferPart {
33 [[nodiscard]] vkw::DeviceSize size() const {
34 return region.size;
35 }
36 [[nodiscard]] vkw::DeviceSize offset() const {
37 return region.offset;
38 }
39 [[nodiscard]] vkw::DescrBuffer range() const {
40 return {buffer, offset(), size()};
41 }
42 [[nodiscard]] constexpr bool is_mapped() const {
43 return p_mapped;
44 }
45 [[nodiscard]] uint8_t* p_data() {
46 jassert(p_mapped != nullptr,
47 "trying to access non-mapped memory");
48 return reinterpret_cast<uint8_t*>(p_mapped) + offset();
49 }
50 [[nodiscard]] const uint8_t* p_data() const {
51 return const_cast<DeviceBufferPart*>(this)->p_data();
52 }
53 [[nodiscard]] constexpr bool is_flush_needed() const {
54 return not (mem_props & vkw::MemProp::HOST_COHERENT);
55 }
56
57 vkw::Buffer buffer;
58 vkw::Memory memory;
59 vkw::MemPropMask mem_props;
60
61 vkw::BufferUsageMask buffer_usage;
62 uint8_t mem_use_index;
63 protected:
64 friend DeviceBufferAtlas;
65 atlas::Atlas1D::Region region;
66 uint8_t *p_mapped;
67 };
68
69 struct DeviceBufferAtlas
70 {
71 constexpr static const vkw::DeviceSize MEGABYTE = 1024 * 1024;
72
73 [[nodiscard]] constexpr static vkw::DeviceSize
74 preferred_allocation_size() {
75 return MEGABYTE * 16;
76 }
77
78 [[nodiscard]] vkw::Result
79 init(vkw::Device device,
80 const vkw::DeviceMemProps &dms,
81 vkw::DeviceSize size,
82 vkw::MemPropMask mem_props,
83 vkw::BufferUsageMask buf_usage,
84 bool map,
85 DeviceBufferPart *p_dst);
86
87 void
88 destroy(vkw::Device device);
89
90 [[nodiscard]] bool is_empty() const { return atlas.is_full(); }
91
92 [[nodiscard]] vkw::Result
93 allocate(vkw::Device d, vkw::DeviceSize size, vkw::DeviceSize alignment,
94 bool map, DeviceBufferPart *p_dst)
95 {
96 auto ares = alignment == 0
97 ? atlas.add(size, &p_dst->region)
98 : atlas.add(size, alignment, &p_dst->region);
99 if (ares == atlas::Result::SUCCESS) {
100 if ((buffer.mem_props & vkw::MemProp::HOST_VISIBLE)
101 and map and buffer.p_mapped == nullptr) {
102 vkw::Result res;
103 res = buffer.memory.map(d, 0, atlas.size, &buffer.p_mapped);
104 if (res != VK_SUCCESS) {
105 atlas.remove(p_dst->region);
106 return res;
107 }
108 }
109 p_dst->p_mapped = buffer.p_mapped;
110 p_dst->buffer = buffer.buffer;
111 p_dst->memory = buffer.memory;
112 p_dst->mem_props = buffer.mem_props;
113 return VK_SUCCESS;
114 }
115 if (ares == atlas::Result::ALLOC_ERROR)
116 return VK_ERROR_OUT_OF_HOST_MEMORY;
117
118 jassert(ares == atlas::Result::NO_SIZE, "unexpected atlas result");
119 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
120 }
121
122 void deallocate(const DeviceBufferPart &gba) {
123 atlas.remove(gba.region);
124 }
125
126 DeviceBuffer buffer;
127 atlas::Atlas1D atlas;
128 };
129 }
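DeviceBufferAtlas::init never allocates less than its preferred 16 MiB block and rounds larger requests up to a multiple of it, so allocations tile the heap in uniform blocks. The sizing rule in isolation (names are illustrative; round_up is assumed to behave like math::round_up):

```cpp
#include <cstdint>

constexpr uint64_t MB = 1024 * 1024;
constexpr uint64_t PREFERRED = 16 * MB; // preferred_allocation_size()

constexpr uint64_t round_up(uint64_t v, uint64_t a) {
    return (v + a - 1) / a * a;
}

// allocation_size selection from DeviceBufferAtlas::init: small requests
// get a full preferred block, large ones a whole multiple of it.
constexpr uint64_t allocation_size(uint64_t size) {
    return size < PREFERRED ? PREFERRED : round_up(size, PREFERRED);
}
```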
File src/device/allocator/buffer_allocator.cpp deleted (index 694e469..0000000)
1 #include "buffer_allocator.h"
2
3 [[nodiscard]] vkw::Result jen::vk::DeviceBufferAllocator::
4 allocate(vkw::DeviceSize size,
5 vkw::DeviceSize alignment,
6 DevMemUsage mem_usage,
7 vkw::BufferUsageMask buffer_usage_mask,
8 bool map_memory,
9 DeviceBufferPart *p_dst)
10 {
11 jassert(size > 0, "size cannot be 0");
12 vkw::Result res;
13
14 if (not mem_usage_supported[mem_usage])
15 mem_usage = STAGING_STATIC_DST;
16 FALLBACK:
17 auto &buffers_by_muse = buffers_by_mem_usage[mem_usage];
18 buffers_by_muse.lock.lock();
19 for (auto &buffers_by_use : buffers_by_muse.values) {
20 if (buffers_by_use.usage == buffer_usage_mask) {
21
22 for (auto &buffer : buffers_by_use.values) {
23 res = buffer.allocate(device, size, alignment, map_memory, p_dst);
24 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
25 continue;
26 p_dst->mem_use_index = mem_usage;
27 p_dst->buffer_usage = buffer_usage_mask;
28 goto RETURN;
29 }
30 }
31 }
32
33 if (not buffers_by_muse.values.insert_dummy()) {
34 res = VK_ERROR_OUT_OF_HOST_MEMORY;
35 goto RETURN;
36 }
37 {
38 auto &new_usage = buffers_by_muse.values.last();
39 new_usage.usage = buffer_usage_mask;
40 if (not new_usage.values.init(8)) {
41 new_usage.values.init();
42 res = VK_ERROR_OUT_OF_HOST_MEMORY;
43 goto RETURN;
44 }
45
46 {
47 new_usage.values.insert_dummy_no_resize_check();
48 auto &new_buffer = new_usage.values.last();
49 res = new_buffer.init(device, dmp, size, GPU_MEM_USAGE_PROPS[mem_usage],
50 buffer_usage_mask, map_memory, p_dst);
51 buffers_by_muse.lock.unlock();
52 if (res != VK_SUCCESS) {
53 new_usage.values.remove_last();
54 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY) {
55 if (mem_usage != STAGING_STATIC_DST) {
56 mem_usage = STAGING_STATIC_DST;
57 goto FALLBACK;
58 }
59 }
60 return res;
61 }
62 p_dst->mem_use_index = mem_usage;
63 p_dst->buffer_usage = buffer_usage_mask;
64 return res;
65 }
66 }
67 RETURN:
68 buffers_by_muse.lock.unlock();
69 return res;
70 }
71
72 void jen::vk::DeviceBufferAllocator::
73 deallocate(const DeviceBufferPart &bp)
74 {
75 jassert(bp.mem_use_index < GPU_MEM_USAGE_COUNT,
76 "corrupted or incorrect buffer allocation");
77 auto &buffers = buffers_by_mem_usage[bp.mem_use_index];
78 buffers.lock.lock();
79 {
80 for (auto &bu : buffers.values) {
81 if (bu.usage == bp.buffer_usage) {
82 for (auto &b : bu.values) {
83 if (b.buffer.buffer == bp.buffer) {
84 b.deallocate(bp);
85 buffers.lock.unlock();
86 return;
87 }
88 }
89 }
90 }
91 }
92 buffers.lock.unlock();
93 jassert_soft(false, "failed to find buffer while removing\n");
94 }
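When the requested DevMemUsage is unsupported, or device memory for it is exhausted, allocate falls back to STAGING_STATIC_DST, whose property combination (HOST_VISIBLE | HOST_COHERENT) the Vulkan specification guarantees to exist. A sketch of that fallback policy (allocate_with_fallback and try_alloc are hypothetical stand-ins for the allocator internals):

```cpp
#include <array>
#include <cstdint>

enum DevMemUsage : uint8_t { STATIC, DYNAMIC_DST, STAGING_STATIC_DST,
                             STAGING_SRC };

// try_alloc stands in for one allocation attempt on a usage class and
// returns true on success. Returns the usage class that satisfied the
// request, or -1 if even the guaranteed staging class failed.
template<class TryAlloc>
int allocate_with_fallback(DevMemUsage wanted,
                           const std::array<bool, 4> &supported,
                           TryAlloc try_alloc)
{
    // Unsupported usages are redirected up front, like the allocator does.
    DevMemUsage use = supported[wanted] ? wanted : STAGING_STATIC_DST;
    if (try_alloc(use))
        return use;
    // Out-of-memory on the first choice: retry once with staging.
    if (use != STAGING_STATIC_DST && try_alloc(STAGING_STATIC_DST))
        return STAGING_STATIC_DST;
    return -1;
}
```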
File src/device/allocator/buffer_allocator.h deleted (index abcef03..0000000)
1 #pragma once
2
3 #include "buffer.h"
4 #include <jlib/threads.h>
5
6 namespace jen::vk
7 {
8 /**
9 * @brief Memory usage types supported by allocator.
10 * Allocator user specifies memory specialization, allocator will select best
11 * available memory type. BufferPart contains the used memory properties,
12 * so the user must check whether the allocated buffer requires flushing or
13 * can be used without staging (DEVICE_LOCAL and HOST_VISIBLE).
14 * The buffer can also be non-DEVICE_LOCAL even if STATIC was requested,
15 * when the only available memory is not DEVICE_LOCAL.
16 * Type names are based on these recommendations:
17 * https://gpuopen.com/vulkan-device-memory/
18 */
19 enum DevMemUsage :uint8_t {
20 /**
21 * @brief DEVICE_LOCAL static data, fastest device memory.
22 * Definitely exists.
23 * STATIC is used for device-local access, while the other types
24 * in DevMemUsage are used for host access.
25 */
26 STATIC,
27 /**
28 * @brief Dynamic data, optimal for transfer to device without staging.
29 * On most integrated devices this is the only memory type available.
30 * Fast for device.
31 */
32 DYNAMIC_DST,
33 /**
34 * @brief Memory for staging transfer to device, slow for device.
35 * Definitely exists; serves as the fallback for DYNAMIC_DST and STAGING_SRC.
36 */
37 STAGING_STATIC_DST,
38 /**
39 * @brief Staging memory, slow for device,
40 * optimal for transfer from a discrete device.
41 */
42 STAGING_SRC
43 };
44 constexpr static const uint8_t GPU_MEM_USAGE_COUNT = 4;
45
46 constexpr static const
47 jl::array<vkw::MemPropMask, GPU_MEM_USAGE_COUNT> GPU_MEM_USAGE_PROPS = {
48 vkw::MemProp::DEVICE_LOCAL,
49
50 vkw::MemProp::DEVICE_LOCAL |
51 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT,
52
53 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT,
54
55 vkw::MemProp::HOST_VISIBLE | vkw::MemProp::HOST_COHERENT |
56 vkw::MemProp::HOST_CACHED,
57 };
58
59 struct DeviceBufferAllocator
60 {
61 void init(vkw::Device device, const vkw::DeviceMemProps &devmemprops) {
62 this->device = device;
63 dmp = devmemprops;
64 mem_usage_supported = {};
65
66 for (uint32_t muse = 0; muse < GPU_MEM_USAGE_COUNT; ++muse) {
67 for (uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i) {
68 auto &mprops = GPU_MEM_USAGE_PROPS[muse];
69 auto filtered = dmp.memoryTypes[i].propertyFlags & mprops;
70 if (filtered == mprops) {
71 mem_usage_supported[muse] = true;
72 buffers_by_mem_usage[muse].init();
73 break;
74 }
75 }
76 }
77 jassert(mem_usage_supported[STATIC]
78 and mem_usage_supported[STAGING_STATIC_DST],
79 "vulkan specification guarantee");
80 }
81 void destroy() {
82 for (uint32_t muse = 0; muse < GPU_MEM_USAGE_COUNT; ++muse)
83 if (mem_usage_supported[muse])
84 buffers_by_mem_usage[muse].destroy(device);
85 }
86
87
88 [[nodiscard]] vkw::Result
89 allocate(vkw::DeviceSize size,
90 vkw::DeviceSize alignment,
91 DevMemUsage mem_usage,
92 vkw::BufferUsageMask buffer_usage_mask,
93 bool map_memory,
94 DeviceBufferPart *p_dst);
95
96 void deallocate(const DeviceBufferPart&);
97
98 struct BuffersUsage {
99 jl::darray<DeviceBufferAtlas> values;
100 vkw::BufferUsageMask usage;
101 };
102 struct BuffersMemUsage {
103 void init() {
104 values.init();
105 lock.init();
106 }
107 void destroy(vkw::Device d) {
108 for (auto &v : values) {
109 for (auto &vv : v.values) {
110 vv.destroy(d);
111 }
112 v.values.destroy();
113 }
114 values.destroy();
115 lock.destroy();
116 }
117
118 jl::darray<BuffersUsage> values;
119 jth::Mutex lock;
120 };
121
122 vkw::Device device;
123 vkw::DeviceMemProps dmp;
124
125 jl::array<BuffersMemUsage, GPU_MEM_USAGE_COUNT> buffers_by_mem_usage;
126 jl::array<bool, GPU_MEM_USAGE_COUNT> mem_usage_supported;
127
128 };
129 }
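A DevMemUsage entry counts as supported when at least one memory type carries all of its required property bits — the `(flags & mask) == mask` scan in DeviceBufferAllocator::init. The same check in isolation (usage_supported is illustrative; the flag values match VkMemoryPropertyFlagBits):

```cpp
#include <array>
#include <cstdint>

// Real VkMemoryPropertyFlagBits values for the four properties used here.
constexpr uint32_t DEVICE_LOCAL  = 0x1;
constexpr uint32_t HOST_VISIBLE  = 0x2;
constexpr uint32_t HOST_COHERENT = 0x4;
constexpr uint32_t HOST_CACHED   = 0x8;

// True if some memory type provides every bit the usage class requires.
bool usage_supported(uint32_t required,
                     const std::array<uint32_t, 4> &type_props)
{
    for (uint32_t flags : type_props)
        if ((flags & required) == required)
            return true;
    return false;
}
```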
File src/device/allocator/memory.cpp deleted (index d2a7b48..0000000)
1 #include "memory.h"
2
3 #include <math/misc.h>
4
5 [[nodiscard]] vkw::Result jen::vk::DeviceMemory::
6 init(vkw::Device dev, const vkw::DeviceMemProps &dmp, vkw::DeviceSize part_size,
7 vkw::MemType mem_type, bool map, DeviceMemoryPart *p_dst)
8 {
9 vkw::DeviceSize heapsize;
10 heapsize = dmp.memoryHeaps[dmp.memoryTypes[mem_type].heapIndex].size;
11 vkw::DeviceSize preferred_size = preferred_allocation_size(heapsize);
12
13 vkw::DeviceSize allocation_size;
14 if (part_size < preferred_size)
15 allocation_size = preferred_size;
16 else
17 allocation_size = math::round_up(part_size, preferred_size);
18
19 // TODO reduce allocation size if out of device memory
20 vkw::Result res;
21 res = memory.allocate(dev, allocation_size, mem_type);
22 if (res != VK_SUCCESS)
23 return res;
24
25 if (not atlas.init(allocation_size, 4)) {
26 res = VK_ERROR_OUT_OF_HOST_MEMORY;
27 goto CANCEL;
28 }
29
30 if (map) {
31 res = map_memory(dev);
32 if (res != VK_SUCCESS)
33 goto CANCEL;
34 }
35 else p_mapped = nullptr;
36
37 p_dst->memory = memory;
38 p_dst->p_mapped = p_mapped;
39
40 atlas::Result ares;
41 ares = atlas.add(part_size, &p_dst->part);
42 jassert(ares == atlas::Result::SUCCESS, "atlas can't fail here");
43
44 return VK_SUCCESS;
45
46 CANCEL:
47 memory.deallocate(dev);
48 return res;
49 }
50
51 [[nodiscard]] vkw::Result jen::vk::DeviceMemory::
52 add(vkw::Device dev, vkw::DeviceSize size, vkw::DeviceSize alignment, bool map,
53 DeviceMemoryPart *p_dst)
54 {
55 auto ares = atlas.add(size, alignment, &p_dst->part);
56 if (ares == atlas::Result::SUCCESS) {
57 p_dst->memory = memory;
58
59
60 if (map) {
61 if (p_mapped == nullptr) {
62 vkw::Result res = map_memory(dev);
63 if (res != VK_SUCCESS) {
64 atlas.remove(p_dst->part);
65 return res;
66 }
67 }
68 p_dst->p_mapped = p_mapped + p_dst->part.offset;
69 }
70 else
71 p_dst->p_mapped = nullptr;
72 return VK_SUCCESS;
73 }
74 if (ares == atlas::Result::NO_SIZE)
75 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
76
77 jassert(ares == atlas::Result::ALLOC_ERROR, "unexpected atlas result");
78 return VK_ERROR_OUT_OF_HOST_MEMORY;
79 }
File src/device/allocator/memory.h deleted (index 899b196..0000000)
1 #pragma once
2
3 #include <vkw/memory.h>
4 #include <atlas/atlas.h>
5
6 namespace jen::vk {
7 struct DeviceMemory;
8 struct DeviceMemoryPart {
9 vkw::Memory memory;
10 vkw::MemType type;
11 uint32_t allocator_index;
12 atlas::Atlas1D::Region part;
13 void *p_mapped;
14 };
15 }
16
17 struct jen::vk::DeviceMemory
18 {
19 constexpr static const vkw::DeviceSize MEGABYTE = 1024 * 1024;
20 constexpr static const uint32_t MAX_ALLOCATIONS_PER_TYPE = 64;
21
22 [[nodiscard]] constexpr static vkw::DeviceSize
23 preferred_allocation_size(vkw::DeviceSize heap_size) {
24 return math::round_up(heap_size / MAX_ALLOCATIONS_PER_TYPE, MEGABYTE);
25 }
26
27
28 [[nodiscard]] vkw::Result
29 init(vkw::Device, const vkw::DeviceMemProps &dmp, vkw::DeviceSize part_size,
30 vkw::MemType mem_type, bool map, DeviceMemoryPart *p_dst);
31
32 [[nodiscard]] vkw::Result map_memory(vkw::Device d) {
33 return memory.map(d, 0, atlas.size, &p_mapped);
34 }
35
36 void destroy(vkw::Device device) {
37 jassert_soft(is_empty(), "not clean while destroying\n");
38 atlas.destroy();
39 if (p_mapped != nullptr)
40 memory.unmap(device);
41 memory.deallocate(device);
42 atlas.size = 0;
43 }
44
45 [[nodiscard]] vkw::Result
46 add(vkw::Device, vkw::DeviceSize size, vkw::DeviceSize alignment, bool map,
47 DeviceMemoryPart *p_dst);
48
49 void remove(const DeviceMemoryPart& part) { atlas.remove(part.part); }
50
51 [[nodiscard]] bool is_empty() { return atlas.is_full(); }
52
53 vkw::Memory memory;
54 atlas::Atlas1D atlas;
55 uint8_t *p_mapped;
56 };
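preferred_allocation_size bounds the number of VkDeviceMemory blocks per type at MAX_ALLOCATIONS_PER_TYPE by sizing each block to heap_size / 64, rounded up to a whole megabyte. The same arithmetic in isolation (round_up is assumed to behave like math::round_up):

```cpp
#include <cstdint>

constexpr uint64_t MEGABYTE = 1024 * 1024;
constexpr uint64_t MAX_ALLOCATIONS_PER_TYPE = 64;

constexpr uint64_t round_up(uint64_t v, uint64_t a) {
    return (v + a - 1) / a * a;
}

// Block size for one VkDeviceMemory allocation: 1/64th of the heap,
// rounded up to a whole megabyte, so at most 64 blocks fill the heap.
constexpr uint64_t preferred_allocation_size(uint64_t heap_size) {
    return round_up(heap_size / MAX_ALLOCATIONS_PER_TYPE, MEGABYTE);
}
```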
File src/device/allocator/memory_allocator.cpp deleted (index e2ca784..0000000)
1 #include "memory_allocator.h"
2
3 [[nodiscard]] vkw::Result jen::vk::DeviceMemoryAllocator::
4 allocate(const vkw::MemReqs &mrs,
5 bool map, DeviceMemoryPart *p_dst)
6 {
7 for (uint32_t mtype = 0; mtype < MAX_MEMORY_TYPES; ++mtype) {
8 uint32_t type_bit = 1 << mtype;
9 if (not (mrs.type_mask & type_bit))
10 continue;
11
12 vkw::Result res;
13
14 auto &type = mem_types[mtype];
15 type.lock.lock();
16
17 uint32_t first_nonallocated = uint32_t(-1);
18 for (uint32_t i = 0; i < type.values.count(); ++i) {
19 auto &m = type.values[i];
20 if (m.atlas.size == 0) {
21 if (first_nonallocated == uint32_t(-1))
22 first_nonallocated = i;
23 }
24 else if (not m.is_empty()) {
25 res = m.add(device, mrs.size, mrs.alignment, map, p_dst);
26 if (res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
27 continue;
28 p_dst->allocator_index = i;
29 goto RETURN;
30 }
31 }
32
33 if (first_nonallocated == uint32_t(-1))
34 goto CONTINUE;
35
36 res = type.values[first_nonallocated].init(device, dmp, mrs.size,
37 mtype, map, p_dst);
38 p_dst->allocator_index = first_nonallocated;
39 if (res != VK_ERROR_OUT_OF_DEVICE_MEMORY)
40 goto RETURN;
41
42 CONTINUE:
43 type.lock.unlock();
44 continue;
45 RETURN:
46 if (res == VK_SUCCESS) {
47 p_dst->type = mtype;
48 }
49 type.lock.unlock();
50 return res;
51 }
52 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
53 }
File src/device/allocator/memory_allocator.h deleted (index 602cf71..0000000)
1 #pragma once
2
3 #include "memory.h"
4 #include <jlib/threads.h>
5
6 namespace jen::vk { struct DeviceMemoryAllocator; }
7
8 struct jen::vk::DeviceMemoryAllocator
9 {
10 void init(vkw::Device d, const vkw::DeviceMemProps &dmp) {
11 for (auto &mt : mem_types) {
12 for (auto &m : mt.values)
13 m.atlas.size = 0;
14 mt.lock.init();
15 }
16 device = d;
17 this->dmp = dmp;
18 }
19 void destroy() {
20 for (auto &mt : mem_types) {
21 mt.lock.destroy();
22 for (auto &m : mt.values)
23 if (m.atlas.size != 0)
24 m.destroy(device);
25 }
26 }
27
28 [[nodiscard]] vkw::Result
29 allocate(const vkw::MemReqs&, bool map, DeviceMemoryPart *p_dst);
30
31 [[nodiscard]] vkw::Result map_memory(DeviceMemoryPart *p_part) {
32 auto &type = mem_types[p_part->type];
33 type.lock.lock();
34 auto &m = type.values[p_part->allocator_index];
35 vkw::Result res = VK_SUCCESS;
36 if (m.p_mapped == nullptr)
37 res = m.map_memory(device);
38 p_part->p_mapped = m.p_mapped;
39 type.lock.unlock();
40 return res;
41 }
42
43 void deallocate(const DeviceMemoryPart &part) {
44 auto &type = mem_types[part.type];
45 type.lock.lock();
46 auto &m = type.values[part.allocator_index];
47 m.remove(part);
48 if (m.is_empty())
49 m.destroy(device);
50 type.lock.unlock();
51 }
52
53 struct LockedMemoryArray {
54 jl::array<DeviceMemory, DeviceMemory::MAX_ALLOCATIONS_PER_TYPE> values;
55 jth::Mutex lock;
56 };
57
58
59 vkw::Device device;
60 vkw::DeviceMemProps dmp;
61
62 constexpr static const uint8_t MAX_MEMORY_TYPES = VK_MAX_MEMORY_TYPES;
63 jl::array<LockedMemoryArray, MAX_MEMORY_TYPES> mem_types;
64 };
File src/device/device.cpp changed (mode: 100644) (index 0a8fb29..c350d03)
1 1 #include "device.h" #include "device.h"
2 #include "../instance/instance.h"
2 3
3 4 static const char* DEVICE_EXTENSIONS[] = { static const char* DEVICE_EXTENSIONS[] = {
4 5 VK_KHR_SWAPCHAIN_EXTENSION_NAME, VK_KHR_SWAPCHAIN_EXTENSION_NAME,
 
... ... static const char* DEVICE_EXTENSIONS[] = {
6 7 }; };
7 8 constexpr static const uint8_t DEVICE_EXTENSION_COUNT = 2; constexpr static const uint8_t DEVICE_EXTENSION_COUNT = 2;
8 9
9 [[nodiscard]] jen::vk::Result
10 [[nodiscard]] jen::Result
10 11 select_queues(jen::Instance *p_inst, vkw::DevicePhysical dev, select_queues(jen::Instance *p_inst, vkw::DevicePhysical dev,
11 jen::vk::QueueIndices *p_qi)
12 jen::QueueIndices *p_qi)
12 13 { {
13 14 vkw::QueueFamiliesProperties properties; vkw::QueueFamiliesProperties properties;
14 15 uint32_t familyCount; uint32_t familyCount;
 
... ... select_queues(jen::Instance *p_inst, vkw::DevicePhysical dev,
33 34
34 35 enum FoundMode { NOT_FOUND, OK, BETTER, BEST }; enum FoundMode { NOT_FOUND, OK, BETTER, BEST };
35 36
36 jen::vk::QueueIndices &is = *p_qi;
37 jen::QueueIndices &is = *p_qi;
37 38
 FoundMode present_found = NOT_FOUND;
 FoundMode graphics_found = NOT_FOUND;

@@ select_queues(jen::Instance *p_inst, vkw::DevicePhysical dev,
 }


-[[nodiscard]] jen::vk::Result
+[[nodiscard]] jen::Result
 isSuitable(vkw::DevicePhysical dev, vkw::Strings exts, vkw::Surface surf = {})
 {
     auto res = dev.check_extension_support(exts);

@@ isSuitable(vkw::DevicePhysical dev, vkw::Strings exts, vkw::Surface surf = {})
     return VK_SUCCESS;
 }

-[[nodiscard]] jen::vk::Result
+[[nodiscard]] jen::Result
 select_device_physical(jen::Instance *p_inst, vkw::Strings device_exts,
                        vkw::DevicePhysical *p_dst)
 {
     vkw::DevicesPhysical devices;
-    jen::vk::Result res;
+    jen::Result res;
     res = vkw::get_devices(p_inst->instance, &devices);
     if (res != VK_SUCCESS)
         return res;

@@ DONE:
     return res;
 }

-[[nodiscard]] jen::vk::Result jen::vk::Device::init(Instance *p_instance) {
+[[nodiscard]] jen::Result jen::Device::init(Instance *p_instance) {
     Result res;
     vkw::Strings extensions;
     if (p_instance->modules_mask & ModulesFlag::GRAPHICS)

@@ DONE:

     vkGetPhysicalDeviceMemoryProperties(physical, &memory_properties);

-    buffer_allocator.init(device, memory_properties);
-    memory_allocator.init(device, memory_properties);
+    buffer_allocator.p = nullptr;
+    memory_allocator.p = nullptr;
+    if (not buffer_allocator.init(device, memory_properties)
+        or
+        not memory_allocator.init(device, memory_properties)) {
+        destroy();
+        return VK_ERROR_OUT_OF_HOST_MEMORY;
+    }

     return VK_SUCCESS;
 }

-void jen::vk::Device::destroy() {
-    buffer_allocator.destroy();
-    memory_allocator.destroy();
+void jen::Device::destroy() {
+    if (buffer_allocator.p != nullptr)
+        buffer_allocator.destroy();
+    if (memory_allocator.p != nullptr)
+        memory_allocator.destroy();
     for (uint32_t i = 0; i < unique_queue_count; ++i)
         mutexes[i].destroy();
     device.destroy();
File src/device/device.h changed (mode: 100644) (index 70948f1..c9d5a0a)
 #pragma once
-
-#define GLFW_INCLUDE_VULKAN
-#include <GLFW/glfw3.h>
-
 #include <jlib/threads.h>
 #include <jlib/array.h>
 #include <vkw/device.h>
 #include <vkw/queue.h>
-#include "allocator/buffer_allocator.h"
-#include "allocator/memory_allocator.h"
-
-#include "../instance/instance.h"
+#include <jen/allocator/buffer.h>
+#include <jen/allocator/memory.h>
+#include <jen/result.h>

-namespace jen::vk
+namespace jen
 {
-using vkw::Result;
+struct Instance;

 struct QueueI {
     vkw::QueueFamily family;
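The device.h hunk above replaces `#include "../instance/instance.h"` with a forward declaration `struct Instance;`, which is valid because the header only ever refers to `Instance` through pointers. A minimal standalone sketch of this decoupling pattern (all names here are hypothetical, not the actual jen types):

```cpp
// "header" part: a forward declaration is enough for pointer members,
// so the heavy definition header stays out of the include graph.
struct Instance;                 // incomplete type

struct Device {
    Instance *p_instance;        // OK: pointers to incomplete types are allowed
    int       queue_count;
};

// "source" part: the full definition is needed only where members are accessed.
struct Instance { int api_version; };

int device_api_version(const Device &d) {
    return d.p_instance->api_version;   // requires the complete type
}
```

Cutting the include this way shortens rebuilds: editing the instance header no longer forces every file that includes device.h to recompile.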
File src/framework.cpp changed (mode: 100644) (index 389672f..385c744)
 #include <jen/framework.h>
+#include "instance/instance.h"
+#include "device/device.h"
+#if JEN_MODULE_GRAPHICS
+#include "graphics/graphics.h"
+#endif
+#if JEN_MODULE_RESOURCE_MANAGER
+#include "resource_manager/resource_manager.h"
+#endif

 [[nodiscard]] bool jen::Framework::
 init(ModulesMask modules_mask, const Settings &settings) {
-    vk::Result res;
-    res = instance.init(modules_mask, settings.thread_pool, settings.window);
-    if (res != VK_SUCCESS)
+    p_instance = nullptr;
+    p_device = nullptr;
+#if JEN_MODULE_GRAPHICS
+    graphics.p = nullptr;
+#endif
+#if JEN_MODULE_RESOURCE_MANAGER
+    resource_manager.p = nullptr;
+#endif
+
+    Result res;
+    if (not jl::allocate(&p_instance)) {
+        res = VK_ERROR_OUT_OF_HOST_MEMORY;
         goto C;
-    res = device.init(&instance);
-    if (res != VK_SUCCESS)
-        goto CI;
+    }

-    p_graphics = nullptr;
-    p_compute = nullptr;
+    res = p_instance->init(modules_mask, settings);
+    if (res != VK_SUCCESS) {
+        jl::deallocate(&p_instance);
+        goto C;
+    }

-    if (modules_mask & ModulesFlag::GRAPHICS) {
-        if (not jl::allocate(&p_graphics))
-            goto CD;
-        res = p_graphics->init(&instance, &device, settings.graphics);
-        if (res != VK_SUCCESS) {
-            jl::deallocate(&p_graphics);
-            goto CD;
-        }
+    if (not jl::allocate(&p_device)) {
+        res = VK_ERROR_OUT_OF_HOST_MEMORY;
+        goto C;
+    }
+    res = p_device->init(p_instance);
+    if (res != VK_SUCCESS) {
+        jl::deallocate(&p_device);
+        goto C;
     }
-    if (modules_mask & ModulesFlag::COMPUTE) {
-        if (not jl::allocate(&p_compute))
-            goto CG;
-        res = p_compute->init(&device);
+
+#if JEN_MODULE_COMPUTE
+    compute.p_device = p_device;
+#endif
+
+#if JEN_MODULE_GRAPHICS
+    if (modules_mask & ModulesFlag::GRAPHICS) {
+        if (not jl::allocate(&graphics.p))
+            goto C;
+        res = graphics.p->init(p_instance, p_device, settings.graphics);
         if (res != VK_SUCCESS) {
-            jl::deallocate(&p_compute);
-            goto CG;
+            jl::deallocate(&graphics.p);
+            goto C;
         }
     }
-    return true;
+#endif

-CG:
-    if (p_graphics != nullptr) {
-        p_graphics->destroy();
-        jl::deallocate(&p_graphics);
+#if JEN_MODULE_RESOURCE_MANAGER
+    if (modules_mask & ModulesFlag::RESOURCE_MANAGER) {
+        if (not jl::allocate(&resource_manager.p))
+            goto C;
+        resource_manager.p->init(graphics);
     }
-CD:
-    device.destroy();
-CI:
-    instance.destroy();
+#endif
+    return true;
 C:
+    this->destroy();
     jassert_soft_release(false, vkw::to_string(res));
     return false;
 }

 void jen::Framework::destroy() {
-    if (p_compute != nullptr) {
-        p_compute->destroy();
-        jl::deallocate(&p_compute);
+#if JEN_MODULE_RESOURCE_MANAGER
+    if (resource_manager.p != nullptr) {
+        resource_manager.p->destroy();
+        jl::deallocate(&resource_manager.p);
     }
-    if (p_graphics != nullptr) {
-        p_graphics->destroy();
-        jl::deallocate(&p_graphics);
+#endif
+#if JEN_MODULE_GRAPHICS
+    if (graphics.p != nullptr) {
+        graphics.p->destroy();
+        jl::deallocate(&graphics.p);
     }
-    device.destroy();
-    instance.destroy();
+#endif
+    if (p_device != nullptr) {
+        p_device->destroy();
+        jl::deallocate(&p_device);
+    }
+    if (p_instance != nullptr) {
+        p_instance->destroy();
+        jl::deallocate(&p_instance);
+    }
+}
+
+[[nodiscard]] Window* jen::Framework::get_window() {
+    return &p_instance->window;
 }
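Framework::init above collapses the old ladder of CI/CD/CG cleanup labels into a single `C:` label that simply calls `destroy()`. That works because every module pointer is nulled before any allocation and `destroy()` skips null members, so it is safe to call with a partially constructed framework. A reduced standalone sketch of the same single-label pattern (the `Framework`/`init`/`destroy` names here are illustrative, not the jen API):

```cpp
#include <cstdlib>

struct Framework {
    int *p_a = nullptr;
    int *p_b = nullptr;

    bool init() {
        // Null everything up front so destroy() can run at any point.
        p_a = nullptr;
        p_b = nullptr;
        p_a = static_cast<int*>(std::malloc(sizeof(int)));
        if (p_a == nullptr) goto C;
        p_b = static_cast<int*>(std::malloc(sizeof(int)));
        if (p_b == nullptr) goto C;
        return true;
    C:  // one label tears down whatever prefix was built
        destroy();
        return false;
    }

    void destroy() {
        // Checks make destroy() idempotent and safe on partial state.
        if (p_b != nullptr) { std::free(p_b); p_b = nullptr; }
        if (p_a != nullptr) { std::free(p_a); p_a = nullptr; }
    }
};
```

The trade-off versus the old per-stage labels is one extra null check per member in `destroy()`, in exchange for a single unwind path that cannot get out of sync with the construction order.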
File src/gpu_image.cpp added (mode: 100644) (index 0000000..a189ce0)
+#include <jen/detail/gpu_image.h>
+#include "device/device.h"
+
+[[nodiscard]] jen::Result jen::detail::GpuImageExtraImage::
+init_image(Device *p_dd, const GpuImageInfo &info)
+{
+    vkw::ImInfo imageInfo; {
+        imageInfo.flags = info.flags;
+        imageInfo.type = info.type;
+        imageInfo.format = info.format;
+        imageInfo.extent = info.extent;
+        imageInfo.mipLevelCount = info.mip_level_count;
+        imageInfo.layerCount = info.layer_count;
+        imageInfo.sampleCount = info.samples;
+        imageInfo.tiling = info.tiling;
+        imageInfo.usageFlags = info.usage;
+        imageInfo.sharingMode = vkw::Sharing::EXCLUSIVE;
+        imageInfo.queueFamilyCount = 0;
+        imageInfo.p_queueFamilies = nullptr;
+        imageInfo.layout = vkw::ImLayout::UNDEFINED;
+    }
+    Result res = image.init(p_dd->device, imageInfo);
+    if (res != VK_SUCCESS)
+        return res;
+
+    vkw::MemPropMask mam = vkw::MemProp::DEVICE_LOCAL;
+    vkw::MemReqs memRs;
+    memRs = image.memoryRequirements(p_dd->device, p_dd->memory_properties, mam);
+    res = p_dd->memory_allocator.allocate(memRs, false, &memory);
+    if (res != VK_SUCCESS)
+        goto C_IMAGE;
+
+    res = image.bind_to_memory(p_dd->device, memory.memory, memory.part.offset);
+    if (res != VK_SUCCESS)
+        goto C_MEMORY;
+    return res;
+
+C_MEMORY: p_dd->memory_allocator.deallocate(memory);
+C_IMAGE:  image.destroy(p_dd->device);
+    return res;
+}
+
+template<jen::GpuImageMode M>
+[[nodiscard]] jen::Result jen::GpuImage<M>::
+init( Device *p_dd,
+      const GpuImageInfo *p_ii,
+      const GpuImageViewInfo *p_vi,
+      const vkw::SamplerInfo *p_si,
+      const GpuImageDescrInfo *p_di) {
+    Result res = init_image(p_dd, *p_ii);
+    if (res != VK_SUCCESS)
+        return res;
+    auto d = p_dd->device;
+    res = this->init_view(d, *p_ii, image, *p_vi);
+    if (res != VK_SUCCESS)
+        goto DI;
+    res = this->init_sampler(d, *p_si);
+    if (res != VK_SUCCESS)
+        goto DV;
+    res = this->init_descr(d, *p_di, this->view, this->sampler);
+    if (res != VK_SUCCESS)
+        goto DS;
+    return res;
+
+DS:
+    this->destroy_sampler(d);
+DV:
+    this->destroy_view(d);
+DI:
+    destroy_image(p_dd->device, p_dd->memory_allocator);
+    return res;
+}
+
+template<jen::GpuImageMode M>
+void jen::GpuImage<M>::
+destroy(Device *p_d, vkw::DescrPool pool) {
+    this->destroy_descr(*p_d, pool);
+    this->destroy_sampler(*p_d);
+    this->destroy_view(*p_d);
+    destroy_image(p_d->device, p_d->memory_allocator);
+}
+
+using namespace jen;
+
+#define EXTERN_DEF(x) \
+template Result GpuImage<GpuImageMode:: x >:: \
+init( Device*, \
+      const GpuImageInfo*, \
+      const GpuImageViewInfo*, \
+      const vkw::SamplerInfo*, \
+      const GpuImageDescrInfo*); \
+template void GpuImage<GpuImageMode:: x >:: \
+destroy(Device*, vkw::DescrPool);
+
+EXTERN_DEF(NONE)
+EXTERN_DEF(VIEW)
+EXTERN_DEF(SAMP)
+EXTERN_DEF(DESCR)
+EXTERN_DEF(SAMP_DESCR)
+
+#undef EXTERN_DEF
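gpu_image.cpp keeps the `GpuImage<M>` method bodies out of the header and emits explicit instantiations for each `GpuImageMode` through the `EXTERN_DEF` macro, so other translation units link against them without seeing the definitions. A minimal sketch of the same technique with hypothetical names (header and source merged into one block for brevity):

```cpp
// --- "widget.h": declaration only, no method bodies ---
enum class Mode { A, B };
template<Mode M> struct Widget { int run(int x); };

// --- "widget.cpp": definitions plus explicit instantiations ---
template<Mode M> int Widget<M>::run(int x) {
    // compile-time template argument, runtime arithmetic
    return M == Mode::A ? x + 1 : x * 2;
}

// Same shape as EXTERN_DEF above: one macro line per instantiated mode.
#define INSTANTIATE(m) template struct Widget<Mode::m>;
INSTANTIATE(A)
INSTANTIATE(B)
#undef INSTANTIATE
```

Only the listed modes get object code; using `Widget<M>` with any other argument from another translation unit would fail at link time, which is exactly the containment the macro enforces.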
File src/graphics/cmd_data.cpp changed (mode: 100644) (index 3afac0c..18dd30a)
 #include "cmd_data.h"
+#include "../device/device.h"

-[[nodiscard]] jen::vk::Result jen::vk::CmdData::PerFrame::
-init(jen::vk::Device *p_dev, vkw::CmdPool pool_primary,
+[[nodiscard]] jen::Result jen::vk::CmdData::PerFrame::
+init(vkw::Device d, vkw::CmdPool pool_primary,
      vkw::CmdPool pool_models, vkw::CmdPool pool_transfer)
 {
     Result res;
-    res = cmd_primary.init(*p_dev, pool_primary);
+    res = cmd_primary.init(d, pool_primary);
     if (res != VK_SUCCESS)
         return res;

-    res = cmd_models.init(*p_dev, pool_models, vkw::CmdBufferType::SECONDARY);
+    res = cmd_models.init(d, pool_models, vkw::CmdBufferType::SECONDARY);
     if (res != VK_SUCCESS)
         goto CP;

-    res = cmd_transfer.init(*p_dev, pool_transfer);
+    res = cmd_transfer.init(d, pool_transfer);
     if (res != VK_SUCCESS)
         goto CS;
-    res = syncs.init(p_dev->device);
+    res = syncs.init(d);
     if (res != VK_SUCCESS)
         goto CT;

@@ init(jen::vk::Device *p_dev, vkw::CmdPool pool_primary,
     return VK_SUCCESS;

 CT:
-    cmd_transfer.destroy(*p_dev, pool_transfer);
+    cmd_transfer.destroy(d, pool_transfer);
 CS:
-    cmd_models.destroy(*p_dev, pool_models);
+    cmd_models.destroy(d, pool_models);
 CP:
-    cmd_primary.destroy(*p_dev, pool_primary);
+    cmd_primary.destroy(d, pool_primary);
     return res;
 }
 void jen::vk::CmdData::PerFrame::
-destroy(jen::vk::Device *p_dev, vkw::CmdPool pool_primary,
+destroy(vkw::Device d, vkw::CmdPool pool_primary,
         vkw::CmdPool pool_models, vkw::CmdPool pool_transfer) {
-    syncs.destroy(p_dev->device);
-    cmd_transfer.destroy(*p_dev, pool_transfer);
-    cmd_models.destroy(*p_dev, pool_models);
-    cmd_primary.destroy(*p_dev, pool_primary);
+    syncs.destroy(d);
+    cmd_transfer.destroy(d, pool_transfer);
+    cmd_models.destroy(d, pool_models);
+    cmd_primary.destroy(d, pool_primary);
 }
-[[nodiscard]] jen::vk::Result jen::vk::CmdData::PerScImage::
-init(Device *p_dev, vkw::CmdPool texts, vkw::CmdPool comp) {
+[[nodiscard]] jen::Result jen::vk::CmdData::PerScImage::
+init(vkw::Device d, vkw::CmdPool texts, vkw::CmdPool comp) {
     Result res;
-    res = cmd_texts.init(*p_dev, texts, vkw::CmdBufferType::SECONDARY);
+    res = cmd_texts.init(d, texts, vkw::CmdBufferType::SECONDARY);
     if (res != VK_SUCCESS)
         return res;
-    res = cmd_composition.init(*p_dev, comp, vkw::CmdBufferType::SECONDARY);
+    res = cmd_composition.init(d, comp, vkw::CmdBufferType::SECONDARY);
     if (res != VK_SUCCESS) {
-        cmd_texts.destroy(*p_dev, texts);
+        cmd_texts.destroy(d, texts);
         return res;
     }
     status_texts = status_composition = {};
     return VK_SUCCESS;
 }
 void jen::vk::CmdData::PerScImage::
-destroy(Device *p_dev, vkw::CmdPool texts, vkw::CmdPool composition) {
-    cmd_composition.destroy(*p_dev, composition);
-    cmd_texts.destroy(*p_dev, texts);
+destroy(vkw::Device d, vkw::CmdPool texts, vkw::CmdPool composition) {
+    cmd_composition.destroy(d, composition);
+    cmd_texts.destroy(d, texts);
 }

-[[nodiscard]] jen::vk::Result jen::vk::CmdData::
+[[nodiscard]] jen::Result jen::vk::CmdData::
 init(Device *p_dev) {
+    vkw::Device d = *p_dev;
     Result res;
-    res = cmd_pool_transfer.init(*p_dev, p_dev->queue_indices.transfer.family,
+    res = cmd_pool_transfer.init(d, p_dev->queue_indices.transfer.family,
                                  vkw::CmdPoolFlag::MANUAL_CMD_RESET);
     if (res != VK_SUCCESS)
         return res;
     for (uint64_t i = 0; i < cmd_pools_graphics.count32(); ++i) {
         res = cmd_pools_graphics[i]
-              .init(*p_dev, p_dev->queue_indices.graphics.family,
+              .init(d, p_dev->queue_indices.graphics.family,
                     vkw::CmdPoolFlag::MANUAL_CMD_RESET);
         if (res != VK_SUCCESS) {
             while (i > 0)
-                cmd_pools_graphics[--i].destroy(*p_dev);
+                cmd_pools_graphics[--i].destroy(d);
             goto CPT;
         }
     }

-    res = per_frame.init(Result(), p_dev, cmd_pools_graphics[0],
+    res = per_frame.init(Result(), d, cmd_pools_graphics[0],
                          cmd_pools_graphics[1], cmd_pool_transfer);
     if (res != VK_SUCCESS)
         goto CPG;
     per_sc_image.init();
-    res = async_transfer.init(p_dev->device, p_dev->queue_indices.transfer.family,
+    res = async_transfer.init(d, p_dev->queue_indices.transfer.family,
                               vkw::CmdPoolFlag::MANUAL_CMD_RESET);
     if (res != VK_SUCCESS)
         goto CPF;
-    res = async_syncs.init(p_dev->device);
+    res = async_syncs.init(d);
     if (res != VK_SUCCESS)
         goto CAT;

@@ init(Device *p_dev) {
     return VK_SUCCESS;


 CAT:
-    async_transfer.destroy(p_dev->device);
+    async_transfer.destroy(d);
 CPF:
-    per_frame.destroy(p_dev, cmd_pools_graphics[0],
+    per_frame.destroy(d, cmd_pools_graphics[0],
                       cmd_pools_graphics[1], cmd_pool_transfer);
 CPG:
-    cmd_pools_graphics.destroy(*p_dev);
+    cmd_pools_graphics.destroy(d);
 CPT:
-    cmd_pool_transfer.destroy(p_dev->device);
+    cmd_pool_transfer.destroy(d);
     return res;
 }

 void jen::vk::CmdData::destroy(Device *p_dev) {
-    async_syncs.destroy(p_dev->device);
-    async_transfer.destroy(p_dev->device);
+    vkw::Device d = *p_dev;
+    async_syncs.destroy(d);
+    async_transfer.destroy(d);
     per_sc_image.destroy(&PerScImage::destroy,
-                         p_dev, cmd_pools_graphics[2], cmd_pools_graphics[3]);
-    per_frame.destroy(p_dev, cmd_pools_graphics[0],
+                         d, cmd_pools_graphics[2], cmd_pools_graphics[3]);
+    per_frame.destroy(d, cmd_pools_graphics[0],
                       cmd_pools_graphics[1], cmd_pool_transfer);
-    cmd_pools_graphics.destroy(*p_dev);
-    cmd_pool_transfer.destroy(p_dev->device);
+    cmd_pools_graphics.destroy(d);
+    cmd_pool_transfer.destroy(d);
 }

 void jen::vk::CmdData::on_sc_recreate(Device *p_dev, uint32_t sc_im_count) {

@@ void jen::vk::CmdData::on_sc_recreate(Device *p_dev, uint32_t sc_im_count) {
     f.on_sc_recreate();
     while (per_sc_image.count32() > sc_im_count) {
         per_sc_image.last()
-            .destroy(p_dev, cmd_pools_graphics[2], cmd_pools_graphics[3]);
+            .destroy(p_dev->device, cmd_pools_graphics[2], cmd_pools_graphics[3]);
         per_sc_image.remove_last();
     }
     for (auto &i : per_sc_image)
         i.on_sc_recreate();
 }

-[[nodiscard]] jen::vk::Result jen::vk::CmdData::
-prepare_per_image(jen::vk::Device *p_dev, uint32_t sc_im_count) {
+[[nodiscard]] jen::Result jen::vk::CmdData::
+prepare_per_image(jen::Device *p_dev, uint32_t sc_im_count) {
     if (per_sc_image.count32() < sc_im_count) {
         if (not per_sc_image.reserve(sc_im_count - per_sc_image.count32()))
             return VK_ERROR_OUT_OF_HOST_MEMORY;

@@ prepare_per_image(jen::vk::Device *p_dev, uint32_t sc_im_count) {
     Result res;
     per_sc_image.insert_dummy_no_resize_check();
     res = per_sc_image.last()
-          .init(p_dev, cmd_pools_graphics[2], cmd_pools_graphics[3]);
+          .init(p_dev->device, cmd_pools_graphics[2], cmd_pools_graphics[3]);
     if (res != VK_SUCCESS) {
         per_sc_image.remove_last();
         return res;
File src/graphics/cmd_data.h changed (mode: 100644) (index 21ed493..68a245b)
 #pragma once
+#include <jen/detail/cmd_container.h>
+#include <jlib/darray.h>

-#include "../device/cmd_container.h"
-
+namespace jen {
+struct Device;
+}
 namespace jen::vk
 {
 using Frame = uint32_t;

@@ namespace jen::vk

 struct PerFrame {
     [[nodiscard]] Result
-    init(Device*,
+    init(vkw::Device,
          vkw::CmdPool primary, vkw::CmdPool models, vkw::CmdPool transfer);
     void
-    destroy(Device*,
+    destroy(vkw::Device,
             vkw::CmdPool primary, vkw::CmdPool models, vkw::CmdPool transfer);

     CmdContainer<2, 0> cmd_primary;

@@ namespace jen::vk
 };
 struct PerScImage {
     [[nodiscard]] Result
-    init(Device*, vkw::CmdPool texts, vkw::CmdPool composition);
+    init(vkw::Device, vkw::CmdPool texts, vkw::CmdPool composition);
     void
-    destroy(Device*, vkw::CmdPool texts, vkw::CmdPool composition);
+    destroy(vkw::Device, vkw::CmdPool texts, vkw::CmdPool composition);

     vkw::CmdBuffer cmd_texts;
     vkw::CmdBuffer cmd_composition;
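The cmd_data changes replace every `Device*` parameter with a by-value `vkw::Device`, hoisting `vkw::Device d = *p_dev;` once per function. This is the usual treatment for Vulkan-style handles: they are small trivially copyable values, so passing them by value is as cheap as passing a pointer and removes a dereference at every call. A standalone sketch with hypothetical names of why the conversion stays transparent at call sites:

```cpp
#include <cstdint>

// A dispatchable handle is an opaque pointer-sized value.
struct Device { std::uint64_t handle; };

// A fatter wrapper object that owns a handle plus extra bookkeeping.
struct DeviceWrapper {
    Device device;
    int    queue_count;
    // Implicit conversion: `use(*p_wrapper)` keeps compiling after the
    // parameter type changes from DeviceWrapper* to Device.
    operator Device() const { return device; }
};

// Functions now take the handle by value instead of DeviceWrapper*.
inline std::uint64_t raw(Device d) { return d.handle; }
```

The conversion operator mirrors how `*p_dev` can be handed to functions that now expect a plain `vkw::Device`.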
File src/graphics/debug_overlay.cpp changed (mode: 100644) (index 7c3e85c..06cdce9)
 #include "debug_overlay.h"
-#include "graphics.h"

 // TODO use charconv
 template<typename floating_t>

@@ uint16_t fill_fps(DebugOverlay *p_do, uint32_t *p_color = nullptr)


 [[nodiscard]] Result
-DebugOverlay::init(ModuleGraphics *p_mg, const char *font_path) {
-    if (not p_mg->create(font_path, &p_font))
+DebugOverlay::init(ModuleGraphics mg, const char *font_path) {
+    if (not mg.create(font_path, &p_font))
         return vkw::Error::ERROR_FILE_OPENING;
     for (auto &p : p_texts)
         p = p->new_();
     return VK_SUCCESS;
 }
-[[nodiscard]] vk::Result
-DebugOverlay::update(ModuleGraphics *p_mg, jl::time elapsed) {
+[[nodiscard]] Result
+DebugOverlay::update(ModuleGraphics mg, jl::time elapsed) {
     uint32_t color;
     color = 0xFFFFFFFF;

     uint16_t pixel_size = 30;

-    jen::Text::Position data;
+    TextPosition data;
     data.offset = { 0, -5 };
     data.text_offset_mode = data.screen_offset_mode
-        = { jen::Text::OffsetMode::X::LEFT,
-            jen::Text::OffsetMode::Y::BOTTOM };
+        = {TextOffsetMode::X::LEFT, TextOffsetMode::Y::BOTTOM};
     data.offset = {};

     period_elapsed += elapsed;
     ++frames_per_period;
     if (period_elapsed.s > 0 or p_texts[0] == nullptr)
     {
-        vk::Result res;
-        res = p_mg->text_update(jen::Text::Layout::LEFT, pixel_size,
-                                {buffer, fill_fps(this)}, {&color, 1},
-                                p_font, &p_texts[0]);
+        Result res;
+        res = mg.text_update(TextLayout::LEFT, pixel_size, {buffer, fill_fps(this)},
+                             color, p_font, &p_texts[0]);
         if (res != VK_SUCCESS)
             return res;

         data.text_offset_mode = data.screen_offset_mode
-            = { jen::Text::OffsetMode::X::LEFT,
-                jen::Text::OffsetMode::Y::TOP };
+            = { TextOffsetMode::X::LEFT, TextOffsetMode::Y::TOP };
         p_texts[0]->pos = data;

         period_elapsed = {};

@@ DebugOverlay::update(ModuleGraphics *p_mg, jl::time elapsed) {
     }
     return VK_SUCCESS;
 }
-void DebugOverlay::destroy(ModuleGraphics *p_mg) {
+void DebugOverlay::destroy(ModuleGraphics mg) {
     for (auto &p : p_texts)
-        p_mg->destroy(p);
+        mg.destroy(p);

-    p_mg->destroy(p_font);
+    mg.destroy(p_font);
 }
-void DebugOverlay::disable(ModuleGraphics *p_mg) {
+void DebugOverlay::disable(ModuleGraphics mg) {
     for (auto &p : p_texts)
-        p_mg->destroy(p), p = p->new_();
+        mg.destroy(p), p = p->new_();
 }
File src/graphics/debug_overlay.h changed (mode: 100644) (index b2010ea..74a9040)
 #pragma once
-
 #include <jlib/time.h>
 #include "draw_data/text_data/glyphs.h"
+#include <jen/graphics.h>

 namespace jen
 {
-struct ModuleGraphics;
-
-
 struct DebugOverlay
 {
     constexpr static const uint32_t BUFFER_SIZE = 1000;
     constexpr static const uint32_t ELAPSED_BUFFER_SIZE = 25;


-    [[nodiscard]] Result init(ModuleGraphics *p_mg, const char *font_path);
-    void destroy(ModuleGraphics *p_mg);
+    [[nodiscard]] Result init(ModuleGraphics mg, const char *font_path);
+    void destroy(ModuleGraphics mg);

-    [[nodiscard]] Result update(ModuleGraphics *p_mg, jl::time elapsed);
-    void disable(ModuleGraphics *p_mg);
+    [[nodiscard]] Result update(ModuleGraphics mg, jl::time elapsed);
+    void disable(ModuleGraphics mg);

     GlyphManager *p_font;
-    Text *p_texts[1];
+    GpuText *p_texts[1];

     jl::time period_elapsed = {};
     uint64_t frames_per_period = 0;
File src/graphics/draw_data/draw_data.cpp changed (mode: 100644) (index 4910e17..2ebb177)
 #include "draw_data.h"
 #include <math/geometry.h>

-[[nodiscard]] jen::vk::Result jen::vk::DrawData::init() {
+[[nodiscard]] jen::Result jen::vk::DrawData::init() {
     camera = {};
     frustum.set_fov_x(math::pi<float>/2);
     frustum.set_aspect(1,1);
File src/graphics/draw_data/draw_data.h changed (mode: 100644) (index 450351c..95b5611)
 #pragma once

-#include "camera.h"
+#include <jen/camera.h>
 #include "destroyer.h"
 #include "../draw_stages/draw_stages.h"
 #include "text_data/text_data.h"
-#include "../model.h"

-namespace jen
-{
-using Lights = jl::rarray<const Light>;
-struct LightsDraw {
-    Lights lights;
-    bool is_updated;
-};
-}
 namespace jen::vk { struct DrawData; }
 struct jen::vk::DrawData
 {
File src/graphics/draw_data/text_data/atlas_buffer.cpp changed (mode: 100644) (index 23ac4b9..e635233)
 #include "atlas_buffer.h"

-[[nodiscard]] jen::vk::Result jen::vk::AtlasBuffer::
-init(DeviceBufferAllocator *p_a, vkw::Extent2D extent)
+[[nodiscard]] jen::Result jen::vk::AtlasBuffer::
+init(DeviceBufferAllocator a, vkw::Extent2D extent)
 {
     this->extent = extent;

     Result res;

     vkw::DeviceSize buffer_size = extent.width * extent.height;
     uint32_t use = vkw::BufferUsage::TRANSFER_DST
                  | vkw::BufferUsage::TRANSFER_SRC;
-    res = p_a->allocate(buffer_size, 0, DevMemUsage::STAGING_STATIC_DST, use,
-                        true, &allocation);
+    res = a.allocate(buffer_size, 0, DevMemUsage::STAGING_STATIC_DST, use,
+                     true, &allocation);
     if (res != VK_SUCCESS)
         return res;

     if (not map.init({extent.width, extent.height}, 20)) {
         res = VK_ERROR_OUT_OF_HOST_MEMORY;
-        p_a->deallocate(allocation);
+        a.deallocate(allocation);
     }
     return res;
 }

-void jen::vk::AtlasBuffer::destroy(DeviceBufferAllocator *p_a) {
+void jen::vk::AtlasBuffer::destroy(DeviceBufferAllocator a) {
     map.destroy();
-    p_a->deallocate(allocation);
+    a.deallocate(allocation);
 }

 void jen::vk::AtlasBuffer::
 write_glyph(const math::v2u32 &offset, free_type::Glyph glyph)
 {
     const auto &w = glyph.width();
     const auto &h = glyph.height();
     const auto &bitmap = glyph.bitmap();

     auto max = offset + math::v2u32{w, h};
     jassert(max.x < extent.width && max.y < extent.height, "out of bounds");

     uint8_t *p_memory = allocation.p_data();

     for (vkw::DeviceSize x = 0; x < w; ++x)
         for (vkw::DeviceSize y = 0; y < h; ++y)
             p_memory[x + offset.x + (y + offset.y) * extent.width] = bitmap[x + y * w];
 }

 [[nodiscard]] bool jen::vk::AtlasBuffer::
 cmp_glyph_debug(const math::v2u32 &offset, free_type::Glyph glyph)
 {
     const auto &width = glyph.width();
     const auto &height = glyph.height();
     const auto &bitmap = glyph.bitmap();

     auto max = offset + math::v2u32{width, height};
     jassert(max.x < extent.width && max.y < extent.height, "out of bounds");

     uint8_t *p_memory = allocation.p_data();

     for (vkw::DeviceSize x = 0; x < width; ++x)
         for (vkw::DeviceSize y = 0; y < height; ++y)
             if (p_memory[x + offset.x + (y + offset.y) * extent.width]
                 != bitmap[x + y * width])
                 return false;
     return true;
 }
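`write_glyph` above copies a w×h one-byte-per-pixel glyph bitmap into the mapped staging buffer, addressing the atlas row-major as `x + y * extent.width`. A self-contained sketch of that indexing, with a plain `std::vector` standing in for the mapped Vulkan allocation (names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy a w*h bitmap into a row-major atlas at pixel offset (ox, oy).
// Mirrors AtlasBuffer::write_glyph: dst[(x + ox) + (y + oy) * atlas_w].
void write_glyph(std::vector<std::uint8_t> &atlas, std::size_t atlas_w,
                 const std::uint8_t *bitmap, std::size_t w, std::size_t h,
                 std::size_t ox, std::size_t oy)
{
    for (std::size_t x = 0; x < w; ++x)
        for (std::size_t y = 0; y < h; ++y)
            atlas[(x + ox) + (y + oy) * atlas_w] = bitmap[x + y * w];
}
```

The source bitmap uses its own width `w` as the row stride while the destination uses the atlas width, which is why the two index expressions differ; `cmp_glyph_debug` walks the identical addressing to verify a glyph landed where the atlas map says it should.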
File src/graphics/draw_data/text_data/atlas_buffer.h changed (mode: 100644) (index ae879dd..096376c)

 namespace jen::vk
 {
 struct AtlasBuffer
 {
-    DeviceBufferPart allocation;
-    vkw::Extent2D extent;
-    atlas::Atlas2D map;
-
     [[nodiscard]] Result
-    init(DeviceBufferAllocator *p_a, vkw::Extent2D extent);
-    void destroy(DeviceBufferAllocator *p_a);
+    init(DeviceBufferAllocator a, vkw::Extent2D extent);
+    void
+    destroy(DeviceBufferAllocator a);

     void
     write_glyph(const math::v2u32 &offset, free_type::Glyph glyph);
     [[nodiscard]] bool
     cmp_glyph_debug(const math::v2u32 &offset, free_type::Glyph glyph);
+
+    DeviceBufferPart allocation;
+    vkw::Extent2D extent;
+    atlas::Atlas2D map;
 };
 }
File src/graphics/draw_data/text_data/glyphs.cpp changed (mode: 100644) (index 30bd744..faa82d9)
... ... search(Glyph::Hash hash, Glyph::Id *p_index) const {
51 51 *p_index = i; *p_index = i;
52 52 return false; return false;
53 53 } }
54 [[nodiscard]] jen::vk::Result jen::GlyphManager::
54 [[nodiscard]] jen::Result jen::GlyphManager::
55 55 change_pixel_size(uint16_t new_pixel_size) { change_pixel_size(uint16_t new_pixel_size) {
56 56 if (new_pixel_size < 1) if (new_pixel_size < 1)
57 57 return vkw::Error::ERROR_INVALID_USAGE; return vkw::Error::ERROR_INVALID_USAGE;
 
... ... change_pixel_size(uint16_t new_pixel_size) {
62 62 } }
63 63 return VK_SUCCESS; return VK_SUCCESS;
64 64 } }
65 [[nodiscard]] jen::vk::Result jen::GlyphManager::
66 text_update(Text::Layout layout, uint16_t pixel_size_, Text::Chars chars,
67 Text::Colors_RGBA colors, vk::Frame frame_index, Text **pp_text)
65 [[nodiscard]] jen::Result jen::GlyphManager::
66 text_update(TextLayout layout, uint16_t pixel_size_, Chars chars,
67 Colors_RGBA colors, vk::Frame frame_index, GpuText **pp_text)
68 68 { {
69 69 Result res; Result res;
70 70 res = change_pixel_size(pixel_size_); res = change_pixel_size(pixel_size_);
 
... ... text_update(Text::Layout layout, uint16_t pixel_size_, Text::Chars chars,
74 74 if (not p_parent->texts.insert_dummy()) if (not p_parent->texts.insert_dummy())
75 75 return VK_ERROR_OUT_OF_HOST_MEMORY; return VK_ERROR_OUT_OF_HOST_MEMORY;
76 76
77 Text text;
77 GpuText text;
78 78 text.size = pixel_size; text.size = pixel_size;
79 79 text.p_parent = this; text.p_parent = this;
80 80
81 Text *p_new;
81 GpuText *p_new;
82 82 res = create_buffer(chars, colors, layout, &text, &p_new); res = create_buffer(chars, colors, layout, &text, &p_new);
83 83 if (res != VK_SUCCESS) { if (res != VK_SUCCESS) {
84 84 p_parent->texts.remove_last(); p_parent->texts.remove_last();
 
... ... write_quad(uint8_t *p_ver, uint8_t *p_ind,
123 123 memcpy(p_ver + quad_i * GM::BOX_VER_SIZE, vecs, GM::BOX_VER_SIZE); memcpy(p_ver + quad_i * GM::BOX_VER_SIZE, vecs, GM::BOX_VER_SIZE);
124 124 memcpy(p_ind + quad_i * GM::BOX_IND_SIZE, indices, GM::BOX_IND_SIZE); memcpy(p_ind + quad_i * GM::BOX_IND_SIZE, indices, GM::BOX_IND_SIZE);
125 125 } }
126 using Glyph = jen::GlyphManager::Glyph;
127 using Text = jen::Text;
126 using namespace jen;
127 using Glyph = GlyphManager::Glyph;
128
128 129 void fill_buffer(
129 Glyph glyphs_tmp[],
130 Glyph::Id ids[],
131 uint16_t g_offset,
132 uint16_t g_count,
133 uint16_t g_total_count,
134 Text::Colors_RGBA colors,
135 math::v2i32 occupied,
136 Text::Layout layout,
137 uint16_t *p_quad_i,
138 Text *p_text)
130 Glyph glyphs_tmp[],
131 Glyph::Id ids[],
132 uint16_t g_offset,
133 uint16_t g_count,
134 uint16_t g_total_count,
135 Colors_RGBA colors,
136 math::v2i32 occupied,
137 TextLayout layout,
138 uint16_t *p_quad_i,
139 GpuText *p_text)
139 140 {
140 141 using GM = jen::GlyphManager;
141 142 math::v2i32 offset;
142 143 switch (layout)
143 144 {
144 case Text::Layout::LEFT:
145 case TextLayout::LEFT:
145 146 offset = { occupied.x / -2, p_text->size };
146 147 break;
147 case Text::Layout::RIGHT: {
148 case TextLayout::RIGHT: {
148 149 offset = { occupied.x / 2, occupied.y };
149 150 auto last_glyph_advance = glyphs_tmp[ids[(g_count - 1) - g_offset]].advance;
150 151 offset.x -= last_glyph_advance;
151 152 } break;
152 case Text::Layout::CENTER:
153 case TextLayout::CENTER:
153 154 offset = { occupied.x / -2, occupied.y - p_text->size / 2 };
154 155 break;
155 156 }
 
... ... void fill_buffer(
159 160 for (uint16_t i = g_offset; i < g_offset + g_count; ++i) {
160 161 uint16_t g_i;
161 162 switch (layout) {
162 case Text::Layout::LEFT : g_i = i; break;
163 case Text::Layout::RIGHT : g_i = (g_count - 1) - i; break;
164 case Text::Layout::CENTER: g_i = i; break;
163 case TextLayout::LEFT : g_i = i; break;
164 case TextLayout::RIGHT : g_i = (g_count - 1) - i; break;
165 case TextLayout::CENTER: g_i = i; break;
165 166 }
166 167 if (ids[g_i] == UINT32_MAX) {
167 168 switch (layout) {
168 case Text::Layout::LEFT:
169 case TextLayout::LEFT:
169 170 offset = { occupied.x / -2, offset.y + p_text->size }; break;
170 case Text::Layout::RIGHT:
171 case TextLayout::RIGHT:
171 172 offset = { occupied.x / 2, offset.y - p_text->size }; break;
172 case Text::Layout::CENTER: break;
173 case TextLayout::CENTER: break;
173 174 }
174 175 continue;
175 176 }
 
... ... void fill_buffer(
184 185 ++(*p_quad_i);
185 186 }
186 187 switch (layout) {
187 case Text::Layout::LEFT:
188 case TextLayout::LEFT:
188 189 offset.x += glyph.advance;
189 190 break;
190 case Text::Layout::RIGHT: {
191 case TextLayout::RIGHT: {
191 192 if (g_i > 0) {
192 193 auto id = ids[g_i-1];
193 194 if (id != UINT32_MAX)

... ... void fill_buffer(
195 196 }
196 197 break;
197 198 }
198 case Text::Layout::CENTER:
199 case TextLayout::CENTER:
199 200 offset.x += glyph.advance;
200 201 break;
201 202 }
202 203 }
203 204 }
204 205
205 [[nodiscard]] jen::vk::Result jen::GlyphManager::
206 create_buffer(Text::Chars chars, Text::Colors_RGBA colors, Text::Layout layout,
207 Text *p_text, Text **pp_dst)
206 [[nodiscard]] jen::Result jen::GlyphManager::
207 create_buffer(Chars chars, Colors_RGBA colors, TextLayout layout,
208 GpuText *p_text, GpuText **pp_dst)
208 209 {
209 210 Result res;
210 211 uint32_t buffer_usage = vkw::BufferUsage::TRANSFER_DST
211 212 | vkw::BufferUsage::VERTEX | vkw::BufferUsage::INDEX;
212 res = p_parent->p_allocator->
213 allocate(BOX_SIZE * chars.count(), 0, vk::DevMemUsage::DYNAMIC_DST,
213 res = p_parent->allocator.
214 allocate(BOX_SIZE * chars.count(), 0, DevMemUsage::DYNAMIC_DST,
214 215 buffer_usage, true, &p_text->buffer);
215 216 if (res != VK_SUCCESS)
216 217 return res;
 
... ... create_buffer(Text::Chars chars, Text::Colors_RGBA colors, Text::Layout layout,
239 240
240 241 ids_tmp[i] = UINT32_MAX;
241 242
242 if (layout == Text::Layout::CENTER)
243 if (layout == TextLayout::CENTER)
243 244 fill_buffer(glyphs_tmp, ids_tmp, i - line_length, line_length,
244 245 uint16_t(chars.count()),
245 246 colors, occupied, layout, &quad_i, p_text);

... ... CONTINUE: continue;
268 269 occupied.x += glyphs_tmp[id_count].advance;
269 270 unique_ids_tmp[id_count++] = hash.id;
270 271 }
271 if (layout != Text::Layout::CENTER) {
272 if (layout != TextLayout::CENTER) {
272 273 if (max_width > occupied.x)
273 274 occupied.x = max_width;
274 275 occupied.y += p_text->size;
 
... ... CONTINUE: continue;
288 289 }
289 290
290 291 size_t all_size;
291 all_size = sizeof(Text) + (id_count-1) * sizeof(Glyph::Id);
292 all_size = sizeof(GpuText) + (id_count-1) * sizeof(Glyph::Id);
292 293 if (jl::allocate_bytes(pp_dst, all_size))
293 294 {
294 295 size_t vertices_offset = p_text->buffer.offset();
295 296 size_t indices_offset = vertices_offset + BOX_VER_SIZE * chars.count();
296 297 p_text->fill(occupied, vertices_offset, indices_offset, 6 * quad_i);
297 298
298 memcpy(*pp_dst,p_text, sizeof(Text));
299 memcpy(*pp_dst,p_text, sizeof(GpuText));
299 300 auto p_unique_ids = reinterpret_cast<uint32_t*>(&(*pp_dst)[1]) - 1;
300 301 memcpy(p_unique_ids, unique_ids_tmp, sizeof(Glyph::Id) * id_count);
301 302 return VK_SUCCESS;
 
... ... CB:
306 307 hash.id = unique_ids_tmp[--id_count];
307 308 return_glyph(hash);
308 309 }
309 p_parent->p_allocator->deallocate(p_text->buffer);
310 p_parent->allocator.deallocate(p_text->buffer);
310 311 return res;
311 312 }
312 [[nodiscard]] jen::vk::Result jen::GlyphManager::
313 [[nodiscard]] jen::Result jen::GlyphManager::
313 314 get_glyph(Glyph::Hash hash, Glyph *p_dst)
314 315 {
315 316 Glyph::Id index;
File src/graphics/draw_data/text_data/glyphs.h changed (mode: 100644) (index 0febd3c..52926c9)
1 1 #pragma once
2 2
3 3 #include "../../draw_stages/draw_stages.h"
4 #include "../../resources/text.h"
4 #include "../../resources.h"
5 5 #include "free_type.h"
6 6
7 7 namespace jen {

... ... struct jen::GlyphManager
39 39 void hard_check_all_glyphs_debug();
40 40
41 41 [[nodiscard]] Result
42 text_update(Text::Layout, uint16_t pixel_size, Text::Chars,
43 Text::Colors_RGBA, vk::Frame, Text **pp_text);
42 text_update(TextLayout, uint16_t pixel_size, Chars,
43 Colors_RGBA, vk::Frame, GpuText **pp_text);
44 44
45 45 [[nodiscard]] Result
46 create_buffer(Text::Chars, Text::Colors_RGBA, Text::Layout,
47 Text *p_text, Text **pp_dst);
46 create_buffer(Chars, Colors_RGBA, TextLayout,
47 GpuText *p_text, GpuText **pp_dst);
48 48
49 49
50 50 free_type::Font font;
File src/graphics/draw_data/text_data/text_data.cpp changed (mode: 100644) (index 2fe154e..6bf076c)
2 2
3 3 #include <algorithm>
4 4
5 [[nodiscard]] jen::vk::Result jen::vk::TextData::
6 init(DeviceBufferAllocator *p_a, const stages::Fonts &stage)
5 [[nodiscard]] jen::Result jen::vk::TextData::
6 init(DeviceBufferAllocator a, const stages::Fonts &stage)
7 7 {
8 p_allocator = p_a;
8 allocator = a;
9 9
10 10 fonts.init();
11 11
12 Result res = atlas.init(p_a, stage.extent);
12 Result res = atlas.init(a, stage.extent);
13 13 if (res != VK_SUCCESS)
14 14 return res;
15 15

... ... init(DeviceBufferAllocator *p_a, const stages::Fonts &stage)
27 27 return VK_SUCCESS;
28 28
29 29 C_FT: freeType.destroy();
30 C_AB: atlas.destroy(p_a);
30 C_AB: atlas.destroy(a);
31 31 return VK_ERROR_OUT_OF_HOST_MEMORY;
32 32 }
33 33
34 inline void jen::vk::TextData::__destroy(Text *p_text)
34 inline void jen::vk::TextData::__destroy(GpuText *p_text)
35 35 {
36 36 GlyphManager::Glyph::Hash hash;
37 37 hash.size = p_text->size;

... ... inline void jen::vk::TextData::__destroy(Text *p_text)
42 42 p_font->return_glyph(hash);
43 43 }
44 44
45 p_allocator->deallocate(p_text->buffer);
45 allocator.deallocate(p_text->buffer);
46 46 jl::deallocate(&p_text);
47 47 }
48 48
49 void jen::vk::TextData::destroy(Text *p_text, uint32_t index)
49 void jen::vk::TextData::destroy(GpuText *p_text, uint32_t index)
50 50 {
51 51 texts.remove(index);
52 52 __destroy(p_text);

... ... create_font(const char *path, GlyphManager **pp_dst) {
79 79 }
80 80
81 81
82 void jen::vk::TextData::destroy(Text *p_text) {
82 void jen::vk::TextData::destroy(GpuText *p_text) {
83 83 if (p_text == nullptr)
84 84 return;
85 Text **p_p_to_remove;
85 GpuText **p_p_to_remove;
86 86 if (texts.find(p_text, &p_p_to_remove)) {
87 87 texts.remove(p_p_to_remove);
88 88 __destroy(p_text);

... ... void jen::vk::TextData::destroy()
105 105 fonts.destroy();
106 106
107 107 freeType.destroy();
108 atlas .destroy(p_allocator);
108 atlas .destroy(allocator);
109 109 }
110 110
111 111
112 [[nodiscard]] jen::vk::Result jen::vk::TextData::Debug::init(TextData *p_parent)
112 [[nodiscard]] jen::Result jen::vk::TextData::Debug::init(TextData *p_parent)
113 113 {
114 114 using namespace vkw;
115 115

... ... void jen::vk::TextData::destroy()
119 119
120 120 uint32_t buffer_usage = BufferUsage::TRANSFER_DST
121 121 | BufferUsage::VERTEX | BufferUsage::INDEX;
122 res = p_parent->p_allocator->
122 res = p_parent->allocator.
123 123 allocate(buffer_size, 0, DevMemUsage::DYNAMIC_DST, buffer_usage,
124 124 true, &buffer);
125 125 if (res != VK_SUCCESS)
File src/graphics/draw_data/text_data/text_data.h changed (mode: 100644) (index 4fb39d2..cdd8a4d)
... ... namespace jen::vk
13 13 {
14 14 [[nodiscard]] Result init(TextData *p_font_data);
15 15 void destroy(TextData *p_parent) {
16 p_parent->p_allocator->deallocate(buffer);
16 p_parent->allocator.deallocate(buffer);
17 17 }
18 18
19 19 DeviceBufferPart buffer;

... ... namespace jen::vk
22 22 Debug debug;
23 23 #endif
24 24
25 [[nodiscard]] Result init(DeviceBufferAllocator*, const stages::Fonts&);
25 [[nodiscard]] Result init(DeviceBufferAllocator, const stages::Fonts&);
26 26 void destroy();
27 27
28 inline void __destroy(Text *p_text);
29 void destroy(Text *p_text);
30 void destroy(Text *p_text, uint32_t index);
28 inline void __destroy(GpuText *p_text);
29 void destroy(GpuText *p_text);
30 void destroy(GpuText *p_text, uint32_t index);
31 31
32 void destroy_old(Text *p_text);
32 void destroy_old(GpuText *p_text);
33 33
34 34 void clean_destroy_marked(Frame);
35 35

... ... namespace jen::vk
39 39 }
40 40
41 41 jl::darray<GlyphManager*> fonts;
42 jl::darray<Text*> texts;
42 jl::darray<GpuText*> texts;
43 43 free_type::Library freeType;
44 44 AtlasBuffer atlas;
45 DeviceBufferAllocator *p_allocator;
45 DeviceBufferAllocator allocator;
46 46 };
47 47 }
File src/graphics/draw_stages/attachment.cpp changed (mode: 100644) (index 2d608bb..666f13c)
1 1 #include "attachment.h"
2 #include "../../device/device.h"
2 3
3 4 [[nodiscard]] vkw::Result jen::vk::Attachment::
4 5 transitionLayout(Device *p_dd, CmdData *p_cmds, const CreateInfo &info)

... ... init(Device *p_dd, CmdData *p_cmds, const CreateInfo &info)
64 65
65 66 format = info.format;
66 67
67 ImageInfo ci; {
68 GpuImageInfo ci; {
68 69 ci.extent = {info.extent,1};
69 70 ci.layer_count = info.flags & CUBE ? 6 : 1;
70 71 ci.mip_level_count = info.mip_level_count;

... ... init(Device *p_dd, CmdData *p_cmds, const CreateInfo &info)
75 76 ci.flags = info.flags & CUBE ? ImFlag::CUBE_COMPATIBLE : ImFlag::NONE;
76 77 ci.tiling = vkw::Tiling::OPTIMAL;
77 78 }
78 ViewInfo vi; {
79 GpuImageViewInfo vi; {
79 80 vi.type = info.flags & CUBE ? ImViewType::CUBE : ImViewType::_2D;
80 81 vi.aspect = {};
81 82 if (info.flags & COLOR)
File src/graphics/draw_stages/attachment.h changed (mode: 100644) (index 2cc1c0e..0a53bd2)
1 1 #pragma once
2
3 #include "gpu_image.h"
2 #include <jen/detail/gpu_image.h>
4 3 #include "../cmd_data.h"
5 4
6 5 namespace jen::vk
7 6 {
8 enum AttachmentFlags : uint32_t {
9 DEPTH = 0b00001,
10 COLOR = 0b00010,
11 CUBE = 0b00100
12 };
13 using AttachmentMask = uint32_t;
7 enum AttachmentFlags : uint32_t {
8 DEPTH = 0b00001,
9 COLOR = 0b00010,
10 CUBE = 0b00100
11 };
12 using AttachmentMask = uint32_t;
14 13
15 struct Attachment
16 {
17 GpuImage<GpuImageMode::VIEW> gpu_image;
18 VkFormat format;
19 bool is_initialized;
14 struct Attachment
15 {
16 GpuImage<GpuImageMode::VIEW> gpu_image;
17 VkFormat format;
18 bool is_initialized;
20 19
21 struct CreateInfo {
22 AttachmentMask flags;
23 vkw::Extent2D extent;
24 uint32_t mip_level_count;
25 VkFormat format;
26 vkw::Samples samples;
27 vkw::ImUsageMask usage;
28 vkw::AccessMask access_consumer;
29 vkw::StageMask stage_consumer;
30 };
20 struct CreateInfo {
21 AttachmentMask flags;
22 vkw::Extent2D extent;
23 uint32_t mip_level_count;
24 VkFormat format;
25 vkw::Samples samples;
26 vkw::ImUsageMask usage;
27 vkw::AccessMask access_consumer;
28 vkw::StageMask stage_consumer;
29 };
31 30
32 [[nodiscard]] Result
33 init(Device *p_dd, CmdData *p_cmds, const CreateInfo&);
34 void destroy(Device*);
35 [[nodiscard]] Result
36 transitionLayout(Device *p_dd, CmdData *p_cmds, const CreateInfo&);
37 };
31 [[nodiscard]] Result
32 init(Device *p_dd, CmdData *p_cmds, const CreateInfo&);
33 void destroy(Device*);
34 [[nodiscard]] Result
35 transitionLayout(Device *p_dd, CmdData *p_cmds, const CreateInfo&);
36 };
38 37 }
File src/graphics/draw_stages/clusters.cpp changed (mode: 100644) (index 063c37a..3999531)
1 1 #include "clusters.h"
2 #include "../../device/device.h"
2 3
3 [[nodiscard]] jen::vk::Result jen::vk::clusters::DescrSet::
4 init(jen::vk::Device *p_dev, jen::vk::DeviceBufferPart *p_buf, Frame f,
4 [[nodiscard]] jen::Result jen::vk::clusters::DescrSet::
5 init(jen::Device *p_dev, jen::DeviceBufferPart *p_buf, Frame f,
5 6 vkw::DescrPool pool, vkw::DescrLayout layout)
6 7 {
7 8 vkw::DeviceSize offset = BUFFER_SIZE * f;

... ... void jen::vk::clusters::DescrSet::destroy(Device *p_dev, vkw::DescrPool pool) {
40 41 offsets_view.destroy(*p_dev);
41 42 }
42 43
43 [[nodiscard]] jen::vk::Result jen::vk::clusters::BufferDevice::
44 init(jen::vk::Device *p_dev, vkw::DescrPool pool)
44 [[nodiscard]] jen::Result jen::vk::clusters::BufferDevice::
45 init(jen::Device *p_dev, vkw::DescrPool pool)
45 46 {
46 47 Result res;
47 48 uint32_t buse = vkw::BufferUsage::STORAGE
File src/graphics/draw_stages/clusters.h changed (mode: 100644) (index 6a01ad8..d2664b1)
1 1 #pragma once
2
3 #include "gpu_image.h"
4 #include <math/vector.h>
2 #include <jen/detail/gpu_image.h>
5 3 #include <vkw/descriptor_pool.h>
6 #include "descriptors.h"
7 4 #include "../cmd_data.h"
8
9 namespace jen {
10 constexpr static const uint32_t MAX_LIGHTS_COUNT = 512;
11 constexpr static const uint32_t MAX_LIGHTS_COUNT_IN_CLUSTER = 128;
12
13 struct Light {
14 math::v3f pos;
15 float radius;
16 math::v4f color;
17 float znear;
18 float zfar;
19 float __junk[2];
20
21 [[nodiscard]] bool operator == (const Light &l) {
22 return memcmp(this, &l, offsetof(Light,__junk)) == 0;
23 }
24 [[nodiscard]] bool operator != (const Light &l) {
25 return not (*this == l);
26 }
27 };
28 static_assert(sizeof(Light) % 16 == 0);
29 }
5 #include <jen/light.h>
30 6
31 7 namespace jen::vk::clusters
32 8 {
File src/graphics/draw_stages/composition/composition.cpp changed (mode: 100644) (index 7ec069a..a0ad25a)
2 2
3 3 using Composition = jen::vk::stages::Composition;
4 4
5 [[nodiscard]] jen::vk::Result
5 [[nodiscard]] jen::Result
6 6 create_pipeline(vkw::Device device, vkw::RenderPass pass, vkw::Extent2D extent,
7 7 Composition *p_data)
8 8 {

... ... create_pipeline(vkw::Device device, vkw::RenderPass pass, vkw::Extent2D extent,
81 81 }
82 82
83 83
84 [[nodiscard]] jen::vk::Result Composition::
84 [[nodiscard]] jen::Result Composition::
85 85 init(vkw::Device dev, PassMain *p_pass, vkw::DescrPool pool, vkw::Extent2D ext)
86 86 {
87 87 Result result = shaders.init(dev);

... ... C_SHADERS: shaders.destroy(dev);
105 105 return result;
106 106 }
107 107
108 [[nodiscard]] jen::vk::Result Composition::
108 [[nodiscard]] jen::Result Composition::
109 109 update(vkw::Device device, PassMain *p_pass, vkw::Extent2D extent) {
110 110 descriptor.update(device, p_pass->attachments.hdr.gpu_image.view);
111 111 if (not pipeline.is_null()) {
File src/graphics/draw_stages/composition/composition.h changed (mode: 100644) (index b62c9fa..641bc15)
1 1 #pragma once
2
3 2 #include "../pass_main.h"
4
5 3 #include "../shaders.h"
6 #include "../descriptors.h"
4 #include <jen/detail/descriptors.h>
7 5
8 6 namespace jen::vk::stages
9 7 {
 
... ... namespace jen::vk::stages
16 14 void
17 15 destroy(vkw::Device, vkw::DescrPool);
18 16
19 vkw::Pipeline pipeline;
20 vkw::PipelineLayout pipelineLayout;
21 Descriptors::ImageView descriptor;
17 vkw::Pipeline pipeline;
18 vkw::PipelineLayout pipelineLayout;
19 DescriptorImageView descriptor;
22 20
23 21 constexpr static const char SH_VERT[] = "shaders/composition/vertex.spv";
24 22 constexpr static const char SH_FRAG[] = "shaders/composition/fragment.spv";
File src/graphics/draw_stages/descriptors.cpp deleted (index b6f0fab..0000000)
1 #include "descriptors.h"
2
3
4 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::UniformBuffer::
5 init(Device *p_dev, vkw::DeviceSize size)
6 {
7 vkw::DeviceSize align;
8 align = jl::max(p_dev->properties.limits.minUniformBufferOffsetAlignment,
9 p_dev->properties.limits.nonCoherentAtomSize);
10
11 uint32_t use = vkw::BufferUsage::TRANSFER_DST | vkw::BufferUsage::UNIFORM;
12 Result res;
13 res = p_dev->buffer_allocator.allocate(size, align, DevMemUsage::DYNAMIC_DST,
14 use, true, &allocation);
15 if (res != VK_SUCCESS)
16 return res;
17 isFlushNeeded = (allocation.mem_props & vkw::MemProp::HOST_COHERENT) == 0;
18 return res;
19 }
20
21 [[nodiscard]] jen::vk::Result
22 create_buffer(jen::vk::Device *p_dev,
23 jen::vk::Descriptors::UniformDynamic *p_set, vkw::DeviceSize size,
24 uint32_t count)
25 {
26 vkw::DeviceSize alignment;
27 alignment = jl::max(p_dev->properties.limits.minUniformBufferOffsetAlignment,
28 p_dev->properties.limits.nonCoherentAtomSize);
29 p_set->single_size = size;
30 p_set->aligned_size = math::round_up(size, alignment);
31 p_set->size = p_set->aligned_size * count;
32
33 uint32_t use = vkw::BufferUsage::TRANSFER_DST | vkw::BufferUsage::UNIFORM;
34 auto res = p_dev->buffer_allocator
35 .allocate(p_set->size, alignment, jen::vk::DevMemUsage::DYNAMIC_DST,
36 use, true, &p_set->allocation);
37 if (res != VK_SUCCESS)
38 return res;
39 p_set->isFlushNeeded = p_set->allocation.is_flush_needed();
40 return res;
41 }
42 [[nodiscard]] jen::vk::Result
43 create_set(vkw::Device dev, jen::vk::Descriptors::UniformDynamic *p_set,
44 uint32_t bind_no, vkw::DescrPool pool)
45 {
46 jen::vk::Result res;
47 res = pool.allocate_set(dev, p_set->layout, &p_set->set);
48 if (res != VK_SUCCESS)
49 return res;
50 vkw::DescrBuffer info; {
51 info.offset = p_set->allocation.offset();
52 info.size = p_set->allocation.size();
53 info.buffer = p_set->allocation.buffer;
54 }
55 p_set->set.set(dev, bind_no, vkw::DescrType::UNIFORM_BUFFER_DYNAMIC, info);
56 return VK_SUCCESS;
57 }
58 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::UniformDynamic::
59 init(Device *p_dev, vkw::DeviceSize size,
60 uint32_t count, vkw::DescrBind binding, vkw::DescrPool pool)
61 {
62 Result res;
63 res = layout.init(p_dev->device, binding);
64 if (res != VK_SUCCESS)
65 return res;
66 res = create_buffer(p_dev, this, size, count);
67 if (res != VK_SUCCESS)
68 goto C_LAYOUT;
69 res = create_set(p_dev->device, this, binding.bind_no, pool);
70 if (res != VK_SUCCESS)
71 goto C_BUFFER;
72 return res;
73 C_BUFFER: p_dev->buffer_allocator.deallocate(allocation);
74 C_LAYOUT: layout.destroy(p_dev->device);
75 return res;
76 }
77 void jen::vk::Descriptors::UniformDynamic::
78 destroy(Device *p_dev, vkw::DescrPool pool) {
79 pool.deallocate_sets(*p_dev, set);
80 layout.destroy(*p_dev);
81 p_dev->buffer_allocator.deallocate(allocation);
82 }
83 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::Textures::Pool::
84 init(vkw::Device device) {
85 consumed = 0;
86 vkw::DescrPoolPart part; {
87 part.type = vkw::DescrType::COMBINED_IMAGE_SAMPLER;
88 part.count = MAX;
89 }
90 return pool.init(device, vkw::DescrPool::Flag::FREE_DESCRIPTOR_SET, part, MAX);
91 }
92
93 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::Textures::
94 init(vkw::Device device) {
95 Result res;
96 vkw::DescrBind binding(0, vkw::DescrType::COMBINED_IMAGE_SAMPLER,
97 1, vkw::ShaderStage::FRAGMENT);
98 res = layout.init(device, binding);
99 if (res != VK_SUCCESS) goto CANCEL;
100 if (not pools.init(2))
101 goto C_LAYOUT;
102 if (not lock.init()) {
103 pools.destroy();
104 goto C_LAYOUT;
105 }
106 return res;
107 C_LAYOUT: layout.destroy(device);
108 CANCEL: return res;
109 }
110
111 void jen::vk::Descriptors::Textures::destroy(vkw::Device device) {
112 jassert_soft(pools.count() == 0, "descriptors not cleaned up");
113 pools.destroy([](auto &i, auto dev){ i.destroy(dev);}, device);
114 lock.destroy();
115 layout.destroy(device);
116 }
117
118 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::Textures::
119 create(vkw::Device dev, vkw::Sampler sampler, vkw::ImView view, Set *p_dst) {
120 Result res;
121 uint_fast8_t pool_index = 0;
122 lock.lock();
123 {
124 for (;pool_index < pools.count(); ++pool_index)
125 if (pools[pool_index].consumed < Pool::MAX)
126 goto POOL_READY;
127 if (not pools.insert_dummy())
128 return VK_ERROR_OUT_OF_HOST_MEMORY;
129 res = pools[pool_index].init(dev);
130 if (res != VK_SUCCESS)
131 goto C_ARRAY;
132
133 POOL_READY:
134 res = pools[pool_index].pool.allocate_set(dev, layout, &p_dst->set);
135 if (res != VK_SUCCESS)
136 goto C_POOL;
137 ++pools[pool_index].consumed;
138 p_dst->pool = pools[pool_index].pool;
139
140 vkw::DescrImage info; {
141 info.sampler = sampler;
142 info.imageView = view;
143 info.imageLayout = vkw::ImLayout::SHADER_READ_ONLY;
144 }
145 p_dst->set.set(dev, 0, vkw::DescrType::COMBINED_IMAGE_SAMPLER, info);
146 }
147 lock.unlock();
148 return VK_SUCCESS;
149
150 C_POOL: pools[pool_index].destroy(dev);
151 C_ARRAY: pools.remove_last();
152 lock.unlock();
153 return res;
154 }
155
156 void jen::vk::Descriptors::Textures::destroy(vkw::Device device, Set set) {
157 lock.lock();
158 {
159 set.pool.deallocate_sets(device, set.set);
160 uint_fast8_t i = 0;
161 for (; pools[i].pool != set.pool; ++i)
162 jassert(i < pools.count(), "descriptor texture set has incorrect pool");
163
164 --pools[i].consumed;
165 if (pools[i].consumed == 0) {
166 pools[i].destroy(device);
167 pools.remove(i);
168 }
169 }
170 lock.unlock();
171 }
172
173 [[nodiscard]] jen::vk::Result jen::vk::Descriptors::ImageView::
174 init(vkw::Device dev, vkw::DescrPool p, vkw::ImView v, vkw::Sampler s) {
175 Result res;
176 vkw::DescrBind binding(0, s.is_null() ? DESCR_TYPE : DESCR_TYPE_SAMPLER,
177 1, vkw::ShaderStage::FRAGMENT);
178 res = layout.init(dev, binding);
179 if (res != VK_SUCCESS)
180 return res;
181 res = p.allocate_set(dev, layout, &set);
182 if (res != VK_SUCCESS)
183 layout.destroy(dev);
184 else
185 update(dev, v, s);
186 return res;
187 }
188
189 void jen::vk::Descriptors::ImageView::
190 update(vkw::Device d, vkw::ImView v, vkw::Sampler s) {
191 vkw::DescrImage i; {
192 i.sampler = s;
193 i.imageView = v;
194 i.imageLayout = vkw::ImLayout::SHADER_READ_ONLY;
195 } set.set(d, 0, s.is_null() ? DESCR_TYPE : DESCR_TYPE_SAMPLER, i);
196 }
197
198 void jen::vk::Descriptors::ImageView::
199 destroy(vkw::Device device, vkw::DescrPool pool) {
200 pool.deallocate_sets(device, set);
201 layout.destroy(device);
202 }
203
File src/graphics/draw_stages/descriptors.h deleted (index 8aeeff6..0000000)
1 #pragma once
2
3 #include <math.h>
4 #include <jlib/darray.h>
5 #include <jlib/threads.h>
6 #include <vkw/descriptor_pool.h>
7
8 #include "../../device/device.h"
9
10 namespace jen::vk::Descriptors
11 {
12 struct UniformBuffer {
13 [[nodiscard]] Result
14 init(Device *p_dev, vkw::DeviceSize size);
15 void destroy(Device *p_dev) {
16 p_dev->buffer_allocator.deallocate(allocation);
17 }
18 DeviceBufferPart allocation;
19 bool isFlushNeeded;
20 };
21
22 struct UniformDynamic : UniformBuffer
23 {
24 [[nodiscard]] Result
25 init(Device*, vkw::DeviceSize size, uint32_t count,
26 vkw::DescrBind, vkw::DescrPool);
27
28 void destroy(Device*, vkw::DescrPool);
29
30 vkw::DescrSet set;
31 vkw::DescrLayout layout;
32 vkw::DeviceSize aligned_size;
33 vkw::DeviceSize single_size;
34 vkw::DeviceSize size;
35
36 [[nodiscard]] uint32_t offset(uint32_t index) const {
37 auto offset = aligned_size * index;
38 jassert(offset < allocation.size(),"buffer offset overflow");
39 return uint32_t(offset);
40 }
41 [[nodiscard]] uint8_t* p_data(uint32_t index) {
42 return allocation.p_data() + offset(index);
43 }
44 [[nodiscard]] Result
45 flush(Device *p_dev, uint32_t index) {
46 if (not isFlushNeeded)
47 return VK_SUCCESS;
48 auto atom = p_dev->properties.limits.nonCoherentAtomSize;
49 vkw::MemoryRange range;
50 if (size <= atom)
51 range = {allocation.memory, allocation.offset(), size};
52 else if (single_size <= atom)
53 range = {allocation.memory, offset(index), atom};
54 else {
55 range = {allocation.memory, allocation.offset(),
56 math::round_up(size, atom)};
57 }
58 return vkw::flush_memory(p_dev->device, range);
59 }
60 };
61
62 struct Textures
63 {
64 struct Set {
65 vkw::DescrPool pool;
66 vkw::DescrSet set;
67 };
68
69 [[nodiscard]] Result init(vkw::Device);
70 void destroy(vkw::Device);
71
72 [[nodiscard]] Result
73 create(vkw::Device, vkw::Sampler, vkw::ImView, Set *p_dst);
74 void destroy(vkw::Device, Set);
75
76
77 struct Pool
78 {
79 static constexpr uint_fast8_t MAX = 255;
80
81 [[nodiscard]] Result init(vkw::Device device);
82 void destroy(vkw::Device device) { pool.destroy(device); }
83
84 vkw::DescrPool pool;
85 uint_fast8_t consumed;
86 };
87
88 jl::darray<Pool> pools;
89 jth::Spinlock lock;
90 vkw::DescrLayout layout;
91 };
92
93 struct ImageView
94 {
95 constexpr static const auto DESCR_TYPE = vkw::DescrType::INPUT_ATTACHMENT;
96 constexpr static const auto DESCR_TYPE_SAMPLER
97 = vkw::DescrType::COMBINED_IMAGE_SAMPLER;
98
99 [[nodiscard]] Result init(vkw::Device, vkw::DescrPool, vkw::ImView,
100 vkw::Sampler = {});
101
102 void update(vkw::Device, vkw::ImView, vkw::Sampler = {});
103
104 void destroy(vkw::Device, vkw::DescrPool);
105
106 vkw::DescrSet set;
107 vkw::DescrLayout layout;
108 };
109 }
110
File src/graphics/draw_stages/draw_stages.cpp changed (mode: 100644) (index df3dcfb..86d4873)
2 2
3 3 //mingw constant
4 4 #undef TRUE
5 [[nodiscard]] jen::vk::Result
5 [[nodiscard]] jen::Result
6 6 create_sampler(vkw::Device device, vkw::Sampler *p_sampler)
7 7 {
8 8 vkw::SamplerInfo info; {

... ... create_sampler(vkw::Device device, vkw::Sampler *p_sampler)
22 22 return p_sampler->init(device, info);
23 23 }
24 24
25 [[nodiscard]] jen::vk::Result
25 [[nodiscard]] jen::Result
26 26 create_pool(vkw::Device device, vkw::DescrPool *p_pool) {
27 27 jl::array<vkw::DescrPoolPart,6> parts = { vkw::DescrPoolPart
28 28 { vkw::DescrType::INPUT_ATTACHMENT, 1},

... ... create_pool(vkw::Device device, vkw::DescrPool *p_pool) {
40 40 parts, count);
41 41 }
42 42
43 [[nodiscard]] jen::vk::Result jen::vk::DrawStages::
43 [[nodiscard]] jen::Result jen::vk::DrawStages::
44 44 init(math::v2i32 framebuffer_extent, vkw::Surface surface,
45 45 Device *p_dev, CmdData *p_cmds, const GraphicsSettings &settings)
46 46 {

... ... void jen::vk::DrawStages::destroy(Device *p_dev) {
138 138 textureSampler.destroy(*p_dev);
139 139 }
140 140
141 [[nodiscard]] jen::vk::Result jen::vk::DrawStages::
141 [[nodiscard]] jen::Result jen::vk::DrawStages::
142 142 recreate(math::v2i32 framebuffer_extent, vkw::Surface surface,
143 143 Device *p_dev, CmdData *p_cmds, const GraphicsSettings &settings)
144 144 {

... ... recreate(math::v2i32 framebuffer_extent, vkw::Surface surface,
182 182 return res;
183 183 }
184 184
185 [[nodiscard]] jen::vk::Result jen::vk::DrawStages::
185 [[nodiscard]] jen::Result jen::vk::DrawStages::
186 186 status(math::v2i32 framebuffer_extent) {
187 187 if (not swap_chain.is_ready)
188 188 return VK_INCOMPLETE;
File src/graphics/draw_stages/fonts/fonts.cpp changed (mode: 100644) (index 9f9d101..63d2b06)
1 1 #include "fonts.h" #include "fonts.h"
2 2
3 [[nodiscard]] jen::vk::Result
3 [[nodiscard]] jen::Result
4 4 create_pipelineLayout(vkw::Device dev, vkw::DescrLayout layout, create_pipelineLayout(vkw::Device dev, vkw::DescrLayout layout,
5 5 vkw::PipelineLayout *p_dst) vkw::PipelineLayout *p_dst)
6 6 { {
 
... ... create_pipelineLayout(vkw::Device dev, vkw::DescrLayout layout,
8 8 return p_dst->init(dev, layout, range); return p_dst->init(dev, layout, range);
9 9 } }
10 10
11 [[nodiscard]] jen::vk::Result
11 [[nodiscard]] jen::Result
12 12 create_pipeline(vkw::Device device, vkw::RenderPass renderPass, create_pipeline(vkw::Device device, vkw::RenderPass renderPass,
13 13 jen::vk::stages::Fonts *p_data, vkw::Extent2D extent) jen::vk::stages::Fonts *p_data, vkw::Extent2D extent)
14 14 { {
 
... ... create_pipeline(vkw::Device device, vkw::RenderPass renderPass,
112 112 return p_data->pipeline.init(device, info); return p_data->pipeline.init(device, info);
113 113 } }
114 114
115 [[nodiscard]] jen::vk::Result jen::vk::stages::Fonts::
115 [[nodiscard]] jen::Result jen::vk::stages::Fonts::
116 116 init(Device *p_dev, vkw::RenderPass pass, vkw::Extent2D draw_extent, init(Device *p_dev, vkw::RenderPass pass, vkw::Extent2D draw_extent,
117 117 vkw::DescrPool pool) vkw::DescrPool pool)
118 118 { {
 
... ... init(Device *p_dev, vkw::RenderPass pass, vkw::Extent2D draw_extent,
125 125 extent.width = uint32_t(mode->width); extent.width = uint32_t(mode->width);
126 126 extent.height = uint32_t(mode->height); extent.height = uint32_t(mode->height);
127 127
128 ImageInfo ci; {
128 GpuImageInfo ci; {
129 129 ci.extent = {extent, 1}; ci.extent = {extent, 1};
130 130 ci.layer_count = 1; ci.layer_count = 1;
131 131 ci.mip_level_count = 1; ci.mip_level_count = 1;
 
... ... init(Device *p_dev, vkw::RenderPass pass, vkw::Extent2D draw_extent,
136 136 ci.flags = {}; ci.flags = {};
137 137 ci.tiling = vkw::Tiling::OPTIMAL; ci.tiling = vkw::Tiling::OPTIMAL;
138 138 } }
139 ViewInfo vi; {
139 GpuImageViewInfo vi; {
140 140 vi.type = vkw::ImViewType::_2D; vi.type = vkw::ImViewType::_2D;
141 141 vi.aspect = vkw::ImAspect::COLOR; vi.aspect = vkw::ImAspect::COLOR;
142 142 } }
 
... ... init(Device *p_dev, vkw::RenderPass pass, vkw::Extent2D draw_extent,
154 154 si.borderColor = vkw::BorderColor::BLACK_OPAQUE_INT; si.borderColor = vkw::BorderColor::BLACK_OPAQUE_INT;
155 155 si.unnormalizedCoordinates = true; si.unnormalizedCoordinates = true;
156 156 } }
157 DescrInfo di{pool};
157 GpuImageDescrInfo di{pool};
158 158 res = atlas.init(p_dev, &ci, &vi, &si, &di); res = atlas.init(p_dev, &ci, &vi, &si, &di);
159 159 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
160 160 goto C_SHADERS; goto C_SHADERS;
 
... ... C_SHADERS: shaders.destroy(p_dev->device);
176 176 return res; return res;
177 177 } }
178 178
179 [[nodiscard]] jen::vk::Result jen::vk::stages::Fonts::
179 [[nodiscard]] jen::Result jen::vk::stages::Fonts::
180 180 update(vkw::Device device, vkw::RenderPass render_pass, vkw::Extent2D extent) update(vkw::Device device, vkw::RenderPass render_pass, vkw::Extent2D extent)
181 181 { {
182 182 atlas.descriptor.update(device, atlas.view, atlas.sampler); atlas.descriptor.update(device, atlas.view, atlas.sampler);
File src/graphics/draw_stages/fonts/fonts.h changed (mode: 100644) (index 31f3fd4..572a7e2)
1 1 #pragma once #pragma once
2
3 2 #include "../pass_main.h" #include "../pass_main.h"
4 3 #include "../shaders.h" #include "../shaders.h"
5 #include "../descriptors.h"
6 #include "../gpu_image.h"
4 #include <jen/detail/gpu_image.h>
7 5
8 6 namespace jen::vk::stages namespace jen::vk::stages
9 7 { {
File src/graphics/draw_stages/gpu_image.cpp deleted (index d983dc0..0000000)
1 #include "gpu_image.h"
2
3 [[nodiscard]] jen::vk::Result jen::vk::detail::GpuImageExtraImage::
4 init_image(Device *p_dd, const ImageInfo &info)
5 {
6 vkw::ImInfo imageInfo; {
7 imageInfo.flags = info.flags;
8 imageInfo.type = info.type;
9 imageInfo.format = info.format;
10 imageInfo.extent = info.extent;
11 imageInfo.mipLevelCount = info.mip_level_count;
12 imageInfo.layerCount = info.layer_count;
13 imageInfo.sampleCount = info.samples;
14 imageInfo.tiling = info.tiling;
15 imageInfo.usageFlags = info.usage;
16 imageInfo.sharingMode = vkw::Sharing::EXCLUSIVE;
17 imageInfo.queueFamilyCount = 0;
18 imageInfo.p_queueFamilies = nullptr;
19 imageInfo.layout = vkw::ImLayout::UNDEFINED;
20 }
21 Result res = image.init(p_dd->device, imageInfo);
22 if (res != VK_SUCCESS)
23 return res;
24
25 vkw::MemPropMask mam = vkw::MemProp::DEVICE_LOCAL;
26 vkw::MemReqs memRs;
27 memRs = image.memoryRequirements(p_dd->device, p_dd->memory_properties, mam);
28 res = p_dd->memory_allocator.allocate(memRs, false, &memory);
29 if (res != VK_SUCCESS)
30 goto C_IMAGE;
31
32 res = image.bind_to_memory(p_dd->device, memory.memory, memory.part.offset);
33 if (res != VK_SUCCESS)
34 goto C_MEMORY;
35 return res;
36
37 C_MEMORY: p_dd->memory_allocator.deallocate(memory);
38 C_IMAGE: image.destroy(p_dd->device);
39 return res;
40 }
File src/graphics/draw_stages/gpu_image.h deleted (index 7e6a879..0000000)
1 #pragma once
2
3 #include "descriptors.h"
4
5 namespace jen::vk
6 {
7 struct ImageInfo {
8 vkw::Extent3D extent;
9 uint32_t layer_count;
10 uint32_t mip_level_count;
11 VkFormat format;
12 vkw::ImType type;
13 vkw::Samples samples;
14 vkw::ImUsageMask usage;
15 vkw::ImMask flags;
16 vkw::Tiling tiling;
17 };
18
19 struct ViewInfo {
20 vkw::ImViewType type;
21 vkw::ImAspectMask aspect;
22 };
23 struct DescrInfo {
24 vkw::DescrPool pool;
25 };
26 }
27 namespace jen::vk::detail
28 {
29 struct GpuImageExtraImage {
30 [[nodiscard]] Result
31 init_image(Device*, const ImageInfo&);
32 void destroy_image(Device *p_d) {
33 p_d->memory_allocator.deallocate(memory);
34 image.destroy(*p_d);
35 }
36
37 DeviceMemoryPart memory;
38 vkw::Image image;
39 };
40 template<bool> struct GpuImageExtraView {
41 [[nodiscard]] constexpr Result
42 init_view(Device*, const ImageInfo&, vkw::Image, const ViewInfo&) {
43 return VK_SUCCESS;
44 }
45 void destroy_view(vkw::Device) {}
46 protected:
47 constexpr static const vkw::ImView view = {};
48 };
49 template<> struct GpuImageExtraView<true> {
50 [[nodiscard]] Result
51 init_view(vkw::Device d, const ImageInfo &ii,
52 vkw::Image im, const ViewInfo&vi) {
53 return view.init(d, im, vi.type, ii.format,
54 {vi.aspect, ii.layer_count, ii.mip_level_count});
55 }
56 void destroy_view(vkw::Device d) {view.destroy(d);}
57 vkw::ImView view;
58 };
59
60 template<bool> struct GpuImageExtraSampler {
61 [[nodiscard]] constexpr Result
62 init_sampler(vkw::Device, const vkw::SamplerInfo&) {return VK_SUCCESS;}
63 void destroy_sampler(vkw::Device) {}
64 protected:
65 constexpr static const vkw::Sampler sampler = {};
66 };
67 template<> struct GpuImageExtraSampler<true> {
68 [[nodiscard]] Result
69 init_sampler(vkw::Device d, const vkw::SamplerInfo &si) {
70 return sampler.init(d, si);
71 }
72 void destroy_sampler(vkw::Device d) {sampler.destroy(d);}
73 vkw::Sampler sampler;
74 };
75
76 template<bool> struct GpuImageExtraDescriptor {
77 [[nodiscard]] constexpr Result
78 init_descr(vkw::Device, const DescrInfo&, vkw::ImView, vkw::Sampler) {
79 return VK_SUCCESS;
80 }
81 void destroy_descr(vkw::Device, vkw::DescrPool) {}
82 };
83 template<> struct GpuImageExtraDescriptor<true> {
84 [[nodiscard]] Result
85 init_descr(vkw::Device d, const DescrInfo&i, vkw::ImView v, vkw::Sampler s){
86 jassert(i.pool, "descriptor is used, but pool is invalid");
87 return descriptor.init(d, i.pool, v, s);
88 }
89 void destroy_descr(vkw::Device d, vkw::DescrPool p) {
90 jassert(p, "descriptor is used, but pool is invalid");
91 descriptor.destroy(d, p);
92 }
93 Descriptors::ImageView descriptor;
94 };
95 enum GpuImageExtras {
96 NONE, VIEW = 1, SAMPLER = 0b10, DESCRIPTOR = 0b100
97 };
98 }
99 namespace jen::vk
100 {
101 enum GpuImageMode {
102 NONE,
103 VIEW = detail::GpuImageExtras::VIEW,
104 SAMP = VIEW | detail::GpuImageExtras::SAMPLER,
105 DESCR = VIEW | detail::GpuImageExtras::DESCRIPTOR,
106 SAMP_DESCR = SAMP | DESCR
107 };
108 template<GpuImageMode M = GpuImageMode::NONE>
109 struct GpuImage :
110 detail::GpuImageExtraImage,
111 detail::GpuImageExtraView<((M & detail::GpuImageExtras::VIEW) > 0)>,
112 detail::GpuImageExtraSampler<((M & detail::GpuImageExtras::SAMPLER) > 0)>,
113 detail::GpuImageExtraDescriptor<((M&detail::GpuImageExtras::DESCRIPTOR) >0)>
114 {
115 [[nodiscard]] Result
116 init(Device *p_dd, const ImageInfo *p_ii,
117 const ViewInfo *p_vi = {}, const vkw::SamplerInfo *p_si = {},
118 const DescrInfo *p_di = {}) {
119 Result res = init_image(p_dd, *p_ii);
120 if (res != VK_SUCCESS)
121 return res;
122 auto d = p_dd->device;
123 res = this->init_view(d, *p_ii, image, *p_vi);
124 if (res != VK_SUCCESS)
125 goto DI;
126 res = this->init_sampler(d, *p_si);
127 if (res != VK_SUCCESS)
128 goto DV;
129 res = this->init_descr(d, *p_di, this->view, this->sampler);
130 if (res != VK_SUCCESS)
131 goto DS;
132 return res;
133
134 DS: this->destroy_sampler(d);
135 DV: this->destroy_view(d);
136 DI: destroy_image(p_dd);
137 return res;
138 }
139 void destroy(Device *p_d, vkw::DescrPool pool = {}) {
140 this->destroy_descr(*p_d,pool);
141 this->destroy_sampler(*p_d);
142 this->destroy_view(*p_d);
143 destroy_image(p_d);
144 }
145 };
146 }
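The deleted `gpu_image.h` composes `GpuImage` from conditional mixin bases: a bitmask enum selects features, and each feature's `<false>` case collapses to an empty stub with no-op methods, so unused features cost nothing at runtime. A reduced sketch of that technique (names here are illustrative, not the repository's):

```cpp
#include <cassert>

enum Extras { NONE = 0, VIEW = 1, SAMPLER = 2 };

// Primary template: stub with no storage and a no-op initializer.
template<bool> struct ExtraView {
    constexpr bool init_view() { return true; }
    constexpr static int view = 0;   // dummy so callers can still read it
};
// Specialization: the real member and real initialization.
template<> struct ExtraView<true> {
    bool init_view() { view = 42; return true; }
    int view = 0;
};

template<int M>
struct Image : ExtraView<(M & VIEW) != 0> {
    bool init() { return this->init_view(); } // dispatches to stub or real
};
```

`Image<NONE>` inherits the empty stub (empty-base optimization applies), while `Image<VIEW>` carries the member; both compile against the same calling code.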
File src/graphics/draw_stages/offscreen/offscreen.cpp changed (mode: 100644) (index 9648f3f..9d5cae5)
1 1 #include "offscreen.h" #include "offscreen.h"
2
2 #include "../../../device/device.h"
3 3 #include <jlib/array.h> #include <jlib/array.h>
4 4
5 5 using Offscreen = jen::vk::stages::Offscreen; using Offscreen = jen::vk::stages::Offscreen;
6 6
7 [[nodiscard]] jen::vk::Result
7 [[nodiscard]] jen::Result
8 8 create_pipelineLayout(vkw::Device dev, const Offscreen::DescriptorSets &sets, create_pipelineLayout(vkw::Device dev, const Offscreen::DescriptorSets &sets,
9 9 vkw::DescrLayout shadow_map_layout, vkw::DescrLayout shadow_map_layout,
10 10 vkw::DescrLayout clusters_layout, vkw::DescrLayout clusters_layout,
 
... ... using namespace jen::vk::stages;
97 97 case GraphicsSettings::DrawMode::POINTS: case GraphicsSettings::DrawMode::POINTS:
98 98 rasterization.polygon_mode = vkw::PolygonMode::POINT; break; rasterization.polygon_mode = vkw::PolygonMode::POINT; break;
99 99 } }
100 rasterization.cull_mode = settings.cull_mode;
101 rasterization.frontFace = FrontFace::CLOCKWISE;
100 rasterization.cull_mode = vkw::CullMode(settings.cull_mode);
101 rasterization.frontFace = FrontFace::CLOCKWISE;
102 102 rasterization.depthBias.set_disabled(); rasterization.depthBias.set_disabled();
103 rasterization.lineWidth = 1.0f;
103 rasterization.lineWidth = 1.0f;
104 104 } }
105 105 Multisample multisample; { Multisample multisample; {
106 106 multisample.rasterizationSamples = settings.multisampling; multisample.rasterizationSamples = settings.multisampling;
File src/graphics/draw_stages/offscreen/offscreen.h changed (mode: 100644) (index 5c34cc7..53d360b)
1 1 #pragma once #pragma once
2
3 #include "../../settings.h"
4 2 #include "../shaders.h" #include "../shaders.h"
5 #include "../descriptors.h"
6 3 #include "../clusters.h" #include "../clusters.h"
7 4 #include <math/frustum.h> #include <math/frustum.h>
8 #include <math/matrix.h>
5 #include <jen/resources.h>
6 #include <jen/settings.h>
9 7
10 namespace jen {
11 enum VAttr : uint8_t { POSITION, TEX_COORD, NORMAL, TEX_IND, TEX_SCALE };
12 constexpr static const uint8_t VATTR_TYPE_COUNT = 5;
13 using VAttrsOffsets = jl::array<vkw::DeviceSize, VATTR_TYPE_COUNT>;
14 }
15 8 namespace jen::vk { namespace jen::vk {
16 9 enum PipelineType : uint8_t { enum PipelineType : uint8_t {
17 10 GENERIC, GENERIC,
 
... ... namespace jen::vk::stages
129 122 void destroy(Device*, vkw::DescrPool); void destroy(Device*, vkw::DescrPool);
130 123
131 124 struct ShaderBuffer { math::m4f transform; }; struct ShaderBuffer { math::m4f transform; };
132 Descriptors::UniformDynamic ubd;
133 Descriptors::Textures textures;
125 DescriptorUniformDynamic ubd;
126 DescriptorTextureAllocator textures;
134 127 }; };
135 128
136 129
File src/graphics/draw_stages/pass_depthcube.cpp changed (mode: 100644) (index 189b052..4bb105b)
1 1 #include "pass_depthcube.h" #include "pass_depthcube.h"
2 #include "../../device/device.h"
2 3
3 [[nodiscard]] static jen::vk::Result
4 [[nodiscard]] static jen::Result
4 5 create_pass(vkw::Device device, create_pass(vkw::Device device,
5 6 VkFormat depth_format, VkFormat depth_format,
6 7 vkw::RenderPass *p_pass) vkw::RenderPass *p_pass)
 
... ... create_pass(vkw::Device device,
33 34 return p_pass->init(device, depth_info, subpass); return p_pass->init(device, depth_info, subpass);
34 35 } }
35 36
36 jen::vk::Result jen::vk::PassDepthCube::
37 [[nodiscard]] jen::Result jen::vk::PassDepthCube::
37 38 init(Device *p_dd, CmdData *p_cmds, const GraphicsSettings &options) init(Device *p_dd, CmdData *p_cmds, const GraphicsSettings &options)
38 39 { {
39 40 Result res; Result res;
 
... ... void jen::vk::PassDepthCube::destroy(Device *p_d) {
79 80 att_depth.destroy(p_d); att_depth.destroy(p_d);
80 81 } }
81 82
82 [[nodiscard]] jen::vk::Result
83 [[nodiscard]] jen::Result
83 84 create_pipeline_layout(vkw::Device d, vkw::DescrLayouts sl, create_pipeline_layout(vkw::Device d, vkw::DescrLayouts sl,
84 85 vkw::PipelineLayout *p_dst) vkw::PipelineLayout *p_dst)
85 86 { {
 
... ... create_pipeline_layout(vkw::Device d, vkw::DescrLayouts sl,
87 88 return p_dst->init(d, sl, range); return p_dst->init(d, sl, range);
88 89 } }
89 90
90 [[nodiscard]] jen::vk::Result
91 [[nodiscard]] jen::Result
91 92 create_pipeline( create_pipeline(
92 93 vkw::Device device, vkw::Device device,
93 94 vkw::RenderPass renderPass, vkw::RenderPass renderPass,
 
... ... const jen::GraphicsSettings &options)
196 197 return p_data->pipeline.init(device, info); return p_data->pipeline.init(device, info);
197 198 } }
198 199
199 [[nodiscard]] jen::vk::Result jen::vk::PipelineShadowOmni::
200 init(jen::vk::Device *p_dev,
200 [[nodiscard]] jen::Result jen::vk::PipelineShadowOmni::
201 init(jen::Device *p_dev,
201 202 jen::vk::PassDepthCube *p_pass, jen::vk::PassDepthCube *p_pass,
202 203 vkw::DescrPool pool, vkw::DescrPool pool,
203 204 const GraphicsSettings &options) const GraphicsSettings &options)
 
... ... void jen::vk::PipelineShadowOmni::destroy(Device *p_d, vkw::DescrPool pool) {
266 267 shaders.destroy(*p_d); shaders.destroy(*p_d);
267 268 } }
268 269
269 [[nodiscard]] jen::vk::Result create_pipeline_debug(
270 [[nodiscard]] jen::Result create_pipeline_debug(
270 271 vkw::Device device, vkw::Device device,
271 272 vkw::RenderPass renderPass, vkw::RenderPass renderPass,
272 273 jen::vk::PipelineDebugDepthCube *p_data, jen::vk::PipelineDebugDepthCube *p_data,
 
... ... jen::vk::PipelineDebugDepthCube *p_data,
356 357 return p_data->pipeline.init(device, info); return p_data->pipeline.init(device, info);
357 358 } }
358 359
359 [[nodiscard]] jen::vk::Result jen::vk::PipelineDebugDepthCube::
360 [[nodiscard]] jen::Result jen::vk::PipelineDebugDepthCube::
360 361 init(vkw::Device device, vkw::DescrLayout dc_layout, vkw::RenderPass rp, init(vkw::Device device, vkw::DescrLayout dc_layout, vkw::RenderPass rp,
361 362 vkw::Extent2D sc_extent) vkw::Extent2D sc_extent)
362 363 { {
File src/graphics/draw_stages/pass_depthcube.h changed (mode: 100644) (index 4e0a7c3..5138bca)
1 1 #pragma once #pragma once
2
3 2 #include "attachment.h" #include "attachment.h"
4 #include "../settings.h"
5 3 #include "shaders.h" #include "shaders.h"
6 #include "descriptors.h"
4 #include <jen/detail/descriptors.h>
7 5 #include <math/matrix.h> #include <math/matrix.h>
6 #include <jen/settings.h>
8 7
9 namespace jen::vk
8 namespace jen::vk {
9 struct PassDepthCube;
10 struct PipelineShadowOmni;
11 struct PipelineDebugDepthCube;
12 }
13 struct jen::vk::PassDepthCube
10 14 { {
11 struct PassDepthCube
12 {
13 [[nodiscard]] Result init(Device*, CmdData*, const GraphicsSettings&);
14 void destroy(Device*);
15
16 [[nodiscard]] VkClearValue CLEAR_VALUE() {
17 VkClearValue val;
18 val.depthStencil.depth = 1.0f;
19 val.depthStencil.stencil = 0;
20 return val;
21 }
22
23 Attachment att_depth;
24 uint32_t extent;
25 vkw::RenderPass render_pass;
26 vkw::Framebuffer framebuffer;
27 };
15 [[nodiscard]] Result init(Device*, CmdData*, const GraphicsSettings&);
16 void destroy(Device*);
28 17
29 struct PipelineShadowOmni
30 {
31 [[nodiscard]] Result
32 init(jen::vk::Device*, PassDepthCube*, vkw::DescrPool,
33 const GraphicsSettings&);
34 void destroy(Device*, vkw::DescrPool);
18 [[nodiscard]] VkClearValue CLEAR_VALUE() {
19 VkClearValue val;
20 val.depthStencil.depth = 1.0f;
21 val.depthStencil.stencil = 0;
22 return val;
23 }
35 24
36 struct LightData {
37 jl::array<math::m4f, 6> trans;
38 float z_far;
39 float z_near;
40 };
25 Attachment att_depth;
26 uint32_t extent;
27 vkw::RenderPass render_pass;
28 vkw::Framebuffer framebuffer;
29 };
30 struct jen::vk::PipelineShadowOmni
31 {
32 [[nodiscard]] Result
33 init(jen::Device*, PassDepthCube*, vkw::DescrPool,
34 const GraphicsSettings&);
35 void
36 destroy(Device*, vkw::DescrPool);
41 37
42 Descriptors::ImageView shadow_map_descriptor;
43 vkw::Sampler sampler;
44 Descriptors::UniformDynamic ubd;
45 vkw::Pipeline pipeline;
46 vkw::PipelineLayout layout;
38 struct LightData {
39 jl::array<math::m4f, 6> trans;
40 float z_far;
41 float z_near;
42 };
47 43
48 constexpr static const char SHADER_VERT[] {"shaders/shadow_cube_vert.spv"};
49 constexpr static const char SHADER_GEOM[] {"shaders/shadow_cube_geom.spv"};
50 constexpr static const char SHADER_FRAG[] {"shaders/shadow_cube_frag.spv"};
51 stages::Shaders<SHADER_VERT, SHADER_GEOM, SHADER_FRAG> shaders;
52 };
44 DescriptorImageView shadow_map_descriptor;
45 vkw::Sampler sampler;
46 DescriptorUniformDynamic ubd;
47 vkw::Pipeline pipeline;
48 vkw::PipelineLayout layout;
53 49
54 struct PipelineDebugDepthCube
55 {
56 [[nodiscard]] Result
57 init(vkw::Device, vkw::DescrLayout dc_layout, vkw::RenderPass,
58 vkw::Extent2D sc_extent);
59 void destroy(vkw::Device device);
50 constexpr static const char SHADER_VERT[] {"shaders/shadow_cube_vert.spv"};
51 constexpr static const char SHADER_GEOM[] {"shaders/shadow_cube_geom.spv"};
52 constexpr static const char SHADER_FRAG[] {"shaders/shadow_cube_frag.spv"};
53 stages::Shaders<SHADER_VERT, SHADER_GEOM, SHADER_FRAG> shaders;
54 };
55 struct jen::vk::PipelineDebugDepthCube
56 {
57 [[nodiscard]] Result
58 init(vkw::Device, vkw::DescrLayout dc_layout, vkw::RenderPass,
59 vkw::Extent2D sc_extent);
60 void destroy(vkw::Device device);
60 61
61 62
62 bool initialized;
63 vkw::Pipeline pipeline;
64 vkw::PipelineLayout layout;
63 bool initialized;
64 vkw::Pipeline pipeline;
65 vkw::PipelineLayout layout;
65 66
66 constexpr static const char S_DDC_VERT[]
67 = "shaders/debug_depth_cube_vert.spv";
68 constexpr static const char S_DDC_FRAG[]
69 = "shaders/debug_depth_cube_frag.spv";
67 constexpr static const char S_DDC_VERT[]
68 = "shaders/debug_depth_cube_vert.spv";
69 constexpr static const char S_DDC_FRAG[]
70 = "shaders/debug_depth_cube_frag.spv";
70 71
71 stages::Shaders<S_DDC_VERT, S_DDC_FRAG> shaders;
72 };
73 }
72 stages::Shaders<S_DDC_VERT, S_DDC_FRAG> shaders;
73 };
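The header restructuring above moves the struct bodies out of the `namespace jen::vk { ... }` block, leaving only forward declarations inside it and defining each type with a qualified name (`struct jen::vk::PassDepthCube { ... };`). C++ permits this once the class has been declared in its namespace. A minimal sketch:

```cpp
#include <cassert>

namespace jen::vk {
    struct PassDepthCube;           // forward declaration inside the namespace
}

// Qualified out-of-namespace definition; valid because of the declaration above.
struct jen::vk::PassDepthCube {
    int extent = 0;
    int doubled() const { return extent * 2; }
};
```

This keeps the namespace block short and flattens the indentation of the struct bodies, which is the visible effect of the diff in this header.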
File src/graphics/draw_stages/pass_main.cpp changed (mode: 100644) (index 9429659..edaaaee)
1 1 #include "pass_main.h" #include "pass_main.h"
2 2
3 [[nodiscard]] jen::vk::Result jen::vk::PassMain::Attachments::
3 [[nodiscard]] jen::Result jen::vk::PassMain::Attachments::
4 4 init(Device *p_dev, CmdData *p_cmds, init(Device *p_dev, CmdData *p_cmds,
5 5 vkw::Extent2D extent, vkw::Samples sample_count) vkw::Extent2D extent, vkw::Samples sample_count)
6 6 { {
 
... ... namespace jen::vk
227 227 } }
228 228 } }
229 229
230 [[nodiscard]] jen::vk::Result jen::vk::PassMain::Framebuffer::
230 [[nodiscard]] jen::Result jen::vk::PassMain::Framebuffer::
231 231 init( vkw::Device device, init( vkw::Device device,
232 232 vkw::RenderPass renderPass_, vkw::RenderPass renderPass_,
233 233 vkw::Image image_, vkw::Image image_,
 
... ... jen::vk::PassMain::Framebuffer::destroy(vkw::Device device)
272 272 } }
273 273 } }
274 274
275 [[nodiscard]] jen::vk::Result jen::vk::PassMain::
275 [[nodiscard]] jen::Result jen::vk::PassMain::
276 276 init(Device *p_dev, CmdData *p_cmds, const SwapChainData &sc, init(Device *p_dev, CmdData *p_cmds, const SwapChainData &sc,
277 277 vkw::Samples sampleCount vkw::Samples sampleCount
278 278 ){ ){
File src/graphics/draw_stages/swap_chain.cpp changed (mode: 100644) (index 851da55..c5f417f)
1 1 #include "swap_chain.h" #include "swap_chain.h"
2 2
3 [[nodiscard]] jen::vk::Result jen::vk::SC_SupportDetails::
3 [[nodiscard]] jen::Result jen::vk::SC_SupportDetails::
4 4 init(vkw::DevicePhysical pd, vkw::Surface s) init(vkw::DevicePhysical pd, vkw::Surface s)
5 5 { {
6 6 Result res; Result res;
 
... ... C_S: surfaceFormats.destroy();
21 21 return res; return res;
22 22 } }
23 23
24 [[nodiscard]] jen::vk::Result jen::vk::SC_SupportDetails::
24 [[nodiscard]] jen::Result jen::vk::SC_SupportDetails::
25 25 update(vkw::DevicePhysical pd, vkw::Surface s) { update(vkw::DevicePhysical pd, vkw::Surface s) {
26 26 return pd.surface_capabilities(s, &surfaceCapabilities); return pd.surface_capabilities(s, &surfaceCapabilities);
27 27 } }
 
... ... chooseMode(bool sync, bool wait_monitor) {
95 95 return best; return best;
96 96 } }
97 97
98 [[nodiscard]] jen::vk::Result jen::vk::SwapChainData::
98 [[nodiscard]] jen::Result jen::vk::SwapChainData::
99 99 init_static(vkw::Surface surface, Device *p_dev) { init_static(vkw::Surface surface, Device *p_dev) {
100 100 Result res; Result res;
101 101 res = supportDetails.init(p_dev->physical, surface); res = supportDetails.init(p_dev->physical, surface);
 
... ... init_static(vkw::Surface surface, Device *p_dev) {
108 108 void jen::vk::SwapChainData::destroy_static() { void jen::vk::SwapChainData::destroy_static() {
109 109 supportDetails.destroy(); supportDetails.destroy();
110 110 } }
111 [[nodiscard]] jen::vk::Result jen::vk::SwapChainData::
111 [[nodiscard]] jen::Result jen::vk::SwapChainData::
112 112 init_updatable(math::v2i32 framebuffer_extent, vkw::Surface surface, init_updatable(math::v2i32 framebuffer_extent, vkw::Surface surface,
113 113 Device *p_dev, bool vSync, bool wait_monitor) Device *p_dev, bool vSync, bool wait_monitor)
114 114 { {
 
... ... CANCEL_SC: swapChain.destroy(*p_dev);
149 149 CANCEL_RP: swapChain.set_null(); CANCEL_RP: swapChain.set_null();
150 150 return res; return res;
151 151 } }
152 [[nodiscard]] jen::vk::Result jen::vk::SwapChainData::
152 [[nodiscard]] jen::Result jen::vk::SwapChainData::
153 153 init(math::v2i32 framebuffer_extent, vkw::Surface surface, init(math::v2i32 framebuffer_extent, vkw::Surface surface,
154 154 Device *p_dev, bool vSync, bool wait_monitor) Device *p_dev, bool vSync, bool wait_monitor)
155 155 { {
 
... ... init(math::v2i32 framebuffer_extent, vkw::Surface surface,
164 164 is_ready = true; is_ready = true;
165 165 return res; return res;
166 166 } }
167 [[nodiscard]] jen::vk::Result jen::vk::SwapChainData::
167 [[nodiscard]] jen::Result jen::vk::SwapChainData::
168 168 update(math::v2i32 framebuffer_extent, vkw::Surface surface, update(math::v2i32 framebuffer_extent, vkw::Surface surface,
169 169 Device *p_dev, bool vSync, bool wait_monitor) Device *p_dev, bool vSync, bool wait_monitor)
170 170 { {
File src/graphics/gpu_transfer/data.cpp changed (mode: 100644) (index 8b4a981..a8ee407)
1 1 #include "data.h" #include "data.h"
2 2
3 [[nodiscard]] jen::vk::Result jen::vk::TransferData::
4 init(Device *p_d, Descriptors::Textures *p_dt)
3 [[nodiscard]] jen::Result jen::vk::TransferData::
4 init(Device *p_dev, DescriptorTextureAllocator *p_da)
5 5 { {
6 6 Result result; Result result;
7 p_dev = p_d;
8 p_descriptors = p_dt;
7 this->p_dev = p_dev;
8 this->p_da = p_da;
9 9 auto &device = p_dev->device; auto &device = p_dev->device;
10 10
11 11 result = cmds_transfer.init(device, p_dev->queue_indices.transfer.family, result = cmds_transfer.init(device, p_dev->queue_indices.transfer.family,
 
... ... void jen::vk::TransferData::destroy()
56 56 } }
57 57
58 58
59 [[nodiscard]] jen::vk::Result jen::vk::TransferData::
59 [[nodiscard]] jen::Result jen::vk::TransferData::
60 60 get_staging(vkw::DeviceSize size, uint32_t *p_i) get_staging(vkw::DeviceSize size, uint32_t *p_i)
61 61 { {
62 62 Result res; Result res;
 
... ... WAIT_FENCES:
121 121 } }
122 122
123 123
124 [[nodiscard]] jen::vk::Result
124 [[nodiscard]] jen::Result
125 125 jen::vk::TransferData::write_data(GpuData *p_data) { jen::vk::TransferData::write_data(GpuData *p_data) {
126 126 uint32_t i; uint32_t i;
127 127 auto &size = p_data->source.size; auto &size = p_data->source.size;
 
... ... jen::vk::TransferData::write_data(GpuData *p_data) {
158 158 return res; return res;
159 159 } }
160 160
161 [[nodiscard]] jen::vk::Result jen::vk::TransferData::
161 [[nodiscard]] jen::Result jen::vk::TransferData::
162 162 write_texture(GpuTexture *p_texture) write_texture(GpuTexture *p_texture)
163 163 { {
164 164 auto &source = p_texture->source; auto &source = p_texture->source;
165 165
166 166 uint32_t i; uint32_t i;
167 167 vkw::DeviceSize texture_size = source.extent.pixel_count() * 4; vkw::DeviceSize texture_size = source.extent.pixel_count() * 4;
168 jen::vk::Result res = get_staging(texture_size, &i);
168 jen::Result res = get_staging(texture_size, &i);
169 169 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
170 170 return res; return res;
171 171
File src/graphics/gpu_transfer/data.h changed (mode: 100644) (index 358fb67..16c824a)
1 1 #pragma once #pragma once
2
3 #include "../resources/texture.h"
4 #include "../resources/data.h"
5 #include "../../device/cmd_container.h"
2 #include "../resources.h"
3 #include <jen/detail/cmd_container.h>
4 #include "../../device/device.h"
6 5
7 6 namespace jen::vk namespace jen::vk
8 7 { {
 
... ... namespace jen::vk
57 56
58 57 struct TransferData struct TransferData
59 58 { {
60 [[nodiscard]] Result init(Device*,Descriptors::Textures*);
59 [[nodiscard]] Result init(Device*, DescriptorTextureAllocator*);
61 60 void destroy(); void destroy();
62 61
63 62 void destroy(GpuData *p_data) { void destroy(GpuData *p_data) {
 
... ... namespace jen::vk
66 65 jl::deallocate(&p_data); jl::deallocate(&p_data);
67 66 } }
68 67 void destroy(GpuTexture *p_texture) { void destroy(GpuTexture *p_texture) {
69 p_descriptors->destroy(*p_dev, p_texture->descriptor);
68 p_da->destroy(*p_dev, p_texture->descriptor);
70 69 p_texture->gpu_im.destroy(p_dev); p_texture->gpu_im.destroy(p_dev);
71 70 p_texture->destroy_source_if_allowed(); p_texture->destroy_source_if_allowed();
72 71 jl::deallocate(&p_texture); jl::deallocate(&p_texture);
 
... ... namespace jen::vk
93 92 return VK_SUCCESS; return VK_SUCCESS;
94 93 } }
95 94
96 Device *p_dev;
97 Descriptors::Textures *p_descriptors;
95 Device *p_dev;
96 DescriptorTextureAllocator *p_da;
98 97
99 98 private: private:
100 99
File src/graphics/gpu_transfer/gpu_transfer.cpp changed (mode: 100644) (index 3a81e5c..d9d5547)
1 1 #include "gpu_transfer.h" #include "gpu_transfer.h"
2 2
3 3 void gpuTransferDestroy(jen::vk::GpuTransfer &gpuTransfer) { void gpuTransferDestroy(jen::vk::GpuTransfer &gpuTransfer) {
4 gpuTransfer.queues.destroy(&gpuTransfer.data);
5 gpuTransfer.data.destroy();
6 gpuTransfer.condition.destroy();
4 gpuTransfer.queues.destroy(&gpuTransfer.data);
5 gpuTransfer.data.destroy();
6 gpuTransfer.condition.destroy();
7 7 } }
8 8
9 9 int gpuTransferLoop(jen::vk::GpuTransfer *p_gpuTransfer) int gpuTransferLoop(jen::vk::GpuTransfer *p_gpuTransfer)
10 10 { {
11 auto &gpuTransfer = *p_gpuTransfer;
12 auto &queues = gpuTransfer.queues;
13
14 jen::GpuData *p_data = nullptr;
15 jen::GpuTexture *p_texture = nullptr;
16
17 while(gpuTransfer.is_needed)
18 {
19 bool data_exist;
20 bool tex_exist;
21
22 auto take_data = [&]() {
23 queues.clean_remove_array(&gpuTransfer.data);
24 data_exist = queues.data.remove_first_if_exist(&p_data);
25 tex_exist = queues.textures.remove_first_if_exist(&p_texture);
26 };
27 queues.lock.lock();
28 for(;;) {
29 take_data();
30 if (data_exist or tex_exist)
31 break;
32
33 queues.lock.unlock();
34 (void)gpuTransfer.data.wait_and_complete_reources();
35 queues.lock.lock();
36
37 take_data();
38 if (data_exist or tex_exist)
39 break;
40
41 if (not gpuTransfer.is_needed) {
42 queues.lock.unlock();
43 goto END_OF_WORK;
44 }
45
46 queues.condition.wait(&queues.lock);
47 }
48 queues.lock.unlock();
49
50 for (;data_exist;) {
51 auto result = gpuTransfer.data.write_data(p_data);
52 if (result == VK_SUCCESS)
53 break;
54 if (result == VK_ERROR_DEVICE_LOST)
55 goto DEVICE_LOST;
56
57 queues.lock.lock();
58 queues.condition.wait(&queues.lock);
59 queues.lock.unlock();
60 }
61
62 for (;tex_exist;) {
63 auto result = gpuTransfer.data.write_texture(p_texture);
64 if (result == VK_SUCCESS)
65 break;
66 if (result == VK_ERROR_DEVICE_LOST)
67 goto DEVICE_LOST;
68
69 queues.lock.lock();
70 queues.condition.wait(&queues.lock);
71 queues.lock.unlock();
72 }
73 }
74 END_OF_WORK:
75 gpuTransferDestroy(gpuTransfer);
76 return 0;
11 auto &gpuTransfer = *p_gpuTransfer;
12 auto &queues = gpuTransfer.queues;
13
14 jen::GpuData *p_data = nullptr;
15 jen::GpuTexture *p_texture = nullptr;
16
17 while(gpuTransfer.is_needed)
18 {
19 bool data_exist;
20 bool tex_exist;
21
22 auto take_data = [&]() {
23 queues.clean_remove_array(&gpuTransfer.data);
24 data_exist = queues.data.remove_first_if_exist(&p_data);
25 tex_exist = queues.textures.remove_first_if_exist(&p_texture);
26 };
27 queues.lock.lock();
28 for(;;) {
29 take_data();
30 if (data_exist or tex_exist)
31 break;
32
33 queues.lock.unlock();
34 (void)gpuTransfer.data.wait_and_complete_reources();
35 queues.lock.lock();
36
37 take_data();
38 if (data_exist or tex_exist)
39 break;
40
41 if (not gpuTransfer.is_needed) {
42 queues.lock.unlock();
43 goto END_OF_WORK;
44 }
45
46 queues.condition.wait(&queues.lock);
47 }
48 queues.lock.unlock();
49
50 for (;data_exist;) {
51 auto result = gpuTransfer.data.write_data(p_data);
52 if (result == VK_SUCCESS)
53 break;
54 if (result == VK_ERROR_DEVICE_LOST)
55 goto DEVICE_LOST;
56
57 queues.lock.lock();
58 queues.condition.wait(&queues.lock);
59 queues.lock.unlock();
60 }
61
62 for (;tex_exist;) {
63 auto result = gpuTransfer.data.write_texture(p_texture);
64 if (result == VK_SUCCESS)
65 break;
66 if (result == VK_ERROR_DEVICE_LOST)
67 goto DEVICE_LOST;
68
69 queues.lock.lock();
70 queues.condition.wait(&queues.lock);
71 queues.lock.unlock();
72 }
73 }
74 END_OF_WORK:
75 gpuTransferDestroy(gpuTransfer);
76 return 0;
77 77
78 78 DEVICE_LOST: DEVICE_LOST:
79 jassert_soft_release(false,"device lost in gpu transfer");
80 while (gpuTransfer.is_needed) {
81 queues.lock.lock();
82 queues.condition.wait(&queues.lock);
83 queues.lock.unlock();
84 }
85
86 if (p_texture != nullptr) {
87 p_texture->destroy_source_if_allowed();
88 jl::deallocate(&p_texture);
89 }
90 if (p_data != nullptr) {
91 p_data->destroy_source_if_allowed();
92 jl::deallocate(&p_data);
93 }
94 return -1;
95 95 } }
96 96
97 [[nodiscard]] jen::vk::Result jen::vk::GpuTransfer::
98 init(Device *p_dev, Descriptors::Textures *p_des)
97 [[nodiscard]] jen::Result jen::vk::GpuTransfer::
98 init(Device *p_dev, DescriptorTextureAllocator *p_da)
99 99 { {
100 is_needed = true;
101 101
102 Result result = data.init(p_dev, p_des);
103 if (result != VK_SUCCESS)
104 return result;
102 Result result = data.init(p_dev, p_da);
103 if (result != VK_SUCCESS)
104 return result;
105 105
106 if (not queues.init())
107 goto DATA;
108 108
109 if (not thread.run_joinable<GpuTransfer, gpuTransferLoop>(this))
110 goto QUEUES;
111 111
112 condition.init();
113 113
114 return VK_SUCCESS;
115 115
116 116 QUEUES: queues.destroy(&data); QUEUES: queues.destroy(&data);
117 117 DATA: data.destroy(); DATA: data.destroy();
118 return VK_ERROR_OUT_OF_HOST_MEMORY;
119 119 } }
120 120
121 121
122 122 [[nodiscard]] VkFormat choose_image_format(jrf::Image::Format format) [[nodiscard]] VkFormat choose_image_format(jrf::Image::Format format)
123 123 { {
124 switch (format) {
125 case jrf::Image::Format::B8G8R8_SRGB: return VK_FORMAT_B8G8R8A8_SRGB;
126 case jrf::Image::Format::B8G8R8A8_SRGB: return VK_FORMAT_B8G8R8A8_SRGB;
127 case jrf::Image::Format::R8G8B8A8_SRGB: return VK_FORMAT_R8G8B8A8_SRGB;
128 default: return VK_FORMAT_UNDEFINED;
129 }
130 130 } }
131 131
132 132 [[nodiscard]] vkw::Result jen::vk::GpuTransfer:: [[nodiscard]] vkw::Result jen::vk::GpuTransfer::
133 133 submit(Priority priority, GpuTexture *p_t, vkw::Sampler sampler) submit(Priority priority, GpuTexture *p_t, vkw::Sampler sampler)
134 134 { {
135 auto &src = p_t->source;
136
137 ImageInfo ci; {
138 ci.extent = {src.extent.width, src.extent.height, 1};
139 ci.layer_count = src.extent.depth;
140 ci.mip_level_count = p_t->mip_levels;
141 ci.format = choose_image_format(src.format);
142 if (ci.format == VK_FORMAT_UNDEFINED) {
143 fprintf(stderr,
144 "GpuTransfer::createTexture - unsupported src texture format\n");
145 return VK_ERROR_FORMAT_NOT_SUPPORTED;
146 }
147 ci.samples = 1;
148 ci.usage = vkw::ImUsage::TRANSFER_DST | vkw::ImUsage::TRANSFER_SRC
149 | vkw::ImUsage::SAMPLED;
150 ci.flags = {};
151 ci.type = vkw::ImType::_2D;
152 ci.tiling = vkw::Tiling::OPTIMAL;
153 }
154 ViewInfo vi; {
155 vi.type = vkw::ImViewType::_2D_ARRAY;
156 vi.aspect = vkw::ImAspect::COLOR;
157 }
158 Result result = p_t->gpu_im.init(data.p_dev, &ci, &vi);
159 if (result != VK_SUCCESS)
160 return result;
161
162 result = data.p_descriptors->create(data.p_dev->device, sampler,
163 p_t->gpu_im.view, &p_t->descriptor);
164 if (result != VK_SUCCESS)
165 goto CANCEL_IMAGE;
166
167 queues.lock.lock();
168 switch (priority)
169 {
170 case Priority::LOW: if (not queues.textures.insert_to_end(p_t))
171 goto CANCEL_DESCR;
172 else break;
173
174 case Priority::HIGH: if (not queues.textures.insert_to_begin(p_t))
175 goto CANCEL_DESCR;
176 else break;
177 default: jabort_debug("invalid GpuTransfer::Priority");
178 }
179 queues.condition.wake_up_thread();
180 queues.lock.unlock();
181 return VK_SUCCESS;
135 auto &src = p_t->source;
136
137 GpuImageInfo ci; {
138 ci.extent = {src.extent.width, src.extent.height, 1};
139 ci.layer_count = src.extent.depth;
140 ci.mip_level_count = p_t->mip_levels;
141 ci.format = choose_image_format(src.format);
142 if (ci.format == VK_FORMAT_UNDEFINED) {
143 fprintf(stderr,
144 "GpuTransfer::createTexture - unsupported src texture format\n");
145 return VK_ERROR_FORMAT_NOT_SUPPORTED;
146 }
147 ci.samples = 1;
148 ci.usage = vkw::ImUsage::TRANSFER_DST | vkw::ImUsage::TRANSFER_SRC
149 | vkw::ImUsage::SAMPLED;
150 ci.flags = {};
151 ci.type = vkw::ImType::_2D;
152 ci.tiling = vkw::Tiling::OPTIMAL;
153 }
154 GpuImageViewInfo vi; {
155 vi.type = vkw::ImViewType::_2D_ARRAY;
156 vi.aspect = vkw::ImAspect::COLOR;
157 }
158 Result result = p_t->gpu_im.init(data.p_dev, &ci, &vi);
159 if (result != VK_SUCCESS)
160 return result;
161
162 result = data.p_da->create(data.p_dev->device, sampler,
163 p_t->gpu_im.view, &p_t->descriptor);
164 if (result != VK_SUCCESS)
165 goto CANCEL_IMAGE;
166
167 queues.lock.lock();
168 switch (priority)
169 {
170 case Priority::LOW: if (not queues.textures.insert_to_end(p_t))
171 goto CANCEL_DESCR;
172 else break;
173
174 case Priority::HIGH: if (not queues.textures.insert_to_begin(p_t))
175 goto CANCEL_DESCR;
176 else break;
177 default: jabort_debug("invalid GpuTransfer::Priority");
178 }
179 queues.condition.wake_up_thread();
180 queues.lock.unlock();
181 return VK_SUCCESS;
182 182
183 183 CANCEL_DESCR: CANCEL_DESCR:
184 queues.lock.unlock();
185 result = VK_ERROR_OUT_OF_HOST_MEMORY;
186 data.p_descriptors->destroy(*data.p_dev, p_t->descriptor);
184 queues.lock.unlock();
185 result = VK_ERROR_OUT_OF_HOST_MEMORY;
186 data.p_da->destroy(*data.p_dev, p_t->descriptor);
187 187 CANCEL_IMAGE: CANCEL_IMAGE:
188 p_t->gpu_im.destroy(data.p_dev);
189 return result;
190 190 } }
191 191
192 [[nodiscard]] jen::vk::Result jen::vk::GpuTransfer::
192 [[nodiscard]] jen::Result jen::vk::GpuTransfer::
193 193 submit(Priority priority, GpuData *p_data) submit(Priority priority, GpuData *p_data)
194 194 { {
195 uint32_t buse = vkw::BufferUsage::TRANSFER_DST
196 | vkw::BufferUsage::VERTEX | vkw::BufferUsage::INDEX;
197
198 Result res;
199 res = data.p_dev->buffer_allocator
200 .allocate(p_data->source.size, 0, DevMemUsage::STATIC, buse, true,
201 &p_data->allocation);
202 if (res != VK_SUCCESS)
203 return res;
204
205 if (not p_data->allocation.is_mapped())
206 {
207 queues.lock.lock();
208 switch (priority) {
209 case Priority::LOW:
210 if (not queues.data.insert_to_end (p_data))
211 goto C_ENOMEM;
212 break;
213 case Priority::HIGH:
214 if (not queues.data.insert_to_begin(p_data))
215 goto C_ENOMEM;
216 break;
217 default: jabort_debug("invalid GpuTransfer::Priority");
218 }
219 queues.condition.wake_up_thread();
220 queues.lock.unlock();
221 return VK_SUCCESS;
222 222
223 223 C_ENOMEM: C_ENOMEM:
224 res = VK_ERROR_OUT_OF_HOST_MEMORY;
225 data.p_dev->buffer_allocator.deallocate(p_data->allocation);
226 return res;
227 }
228 else
229 {
230 memcpy(p_data->allocation.p_data(), p_data->source.p, p_data->source.size);
231 p_data->state = ResourceState::DONE;
232 p_data->destroy_source_if_allowed();
233 //TODO flush
234 jassert_soft_release(not p_data->allocation.is_flush_needed(),
235 "unimplemented flushing");
236 return VK_SUCCESS;
237 }
238 238 } }
239 239
240 240 template<typename Resource> template<typename Resource>
 
... ... void destroy_res(jen::vk::GpuTransfer *p_gt, Resource *p_resource)
255 255 p_gt->queues.lock.unlock(); p_gt->queues.lock.unlock();
256 256 } }
257 257
258 void jen::vk::GpuTransfer::destroy(GpuData *p_resource)
259 { destroy_res(this, p_resource); }
260 void jen::vk::GpuTransfer::destroy(GpuTexture *p_resource)
261 { destroy_res(this, p_resource); }
258 void jen::vk::GpuTransfer::destroy(GpuData *p_resource) {
259 destroy_res(this, p_resource);
260 }
261 void jen::vk::GpuTransfer::destroy(GpuTexture *p_resource) {
262 destroy_res(this, p_resource);
263 }
File src/graphics/gpu_transfer/gpu_transfer.h changed (mode: 100644) (index d2cf141..cf6f4b4)
1 1 #pragma once #pragma once
2
3 2 #include "queues.h" #include "queues.h"
4
5 #include "../draw_stages/descriptors.h"
3 #include <jen/detail/descriptors.h>
6 4
7 5 namespace jen::vk namespace jen::vk
8 6 { {
9 struct GpuTransfer
10 {
11 enum class Priority { LOW, HIGH };
12
13 [[nodiscard]] Result init(Device*, Descriptors::Textures*);
14
15 void send_destroy_signal() {
16 queues.lock.lock();
17 is_needed = false;
18 queues.condition.wake_up_threads();
19 queues.lock.unlock();
20 }
21 void join() {
22 send_destroy_signal();
23 (void)thread.join();
24 }
25
26 [[nodiscard]] Result
27 submit(Priority, GpuData *p_data );
28 [[nodiscard]] Result
29 submit(Priority, GpuTexture *p_texture, vkw::Sampler sampler);
30
31 void destroy(GpuData *p_resource);
32 void destroy(GpuTexture *p_resource);
33
34 TransferQueues queues;
35 TransferData data;
36 jth::Thread<int> thread;
37 jth::Condition condition;
38 volatile bool is_needed;
39 };
7 struct GpuTransfer
8 {
9 enum class Priority { LOW, HIGH };
10
11 [[nodiscard]] Result init(Device*, DescriptorTextureAllocator*);
12
13 void send_destroy_signal() {
14 queues.lock.lock();
15 is_needed = false;
16 queues.condition.wake_up_threads();
17 queues.lock.unlock();
18 }
19 void join() {
20 send_destroy_signal();
21 (void)thread.join();
22 }
23
24 [[nodiscard]] Result
25 submit(Priority, GpuData *p_data );
26 [[nodiscard]] Result
27 submit(Priority, GpuTexture *p_texture, vkw::Sampler sampler);
28
29 void destroy(GpuData *p_resource);
30 void destroy(GpuTexture *p_resource);
31
32 TransferQueues queues;
33 TransferData data;
34 jth::Thread<int> thread;
35 jth::Condition condition;
36 volatile bool is_needed;
37 };
40 38 } }
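`GpuTransfer::submit(Priority, GpuData*)` above picks one of two upload paths: if the allocation landed in host-visible (mapped) memory, the source bytes are copied immediately and the resource is marked done; otherwise the job is queued for the worker thread to stage through a transfer buffer. A compact sketch of that policy, using made-up types (`Allocation`, `Upload`, `submit_upload`) rather than JEN's allocator API:

```cpp
#include <cstring>
#include <cstdint>
#include <vector>

// Illustrative upload policy: synchronous memcpy into mapped memory,
// otherwise defer to a worker queue for a staged transfer.
enum class State { QUEUED, DONE };

struct Allocation {
    std::vector<uint8_t> host;   // non-empty => host-visible ("mapped")
    bool is_mapped() const { return !host.empty(); }
    void *p_data() { return host.data(); }
};

struct Upload {
    const void *src = nullptr;
    size_t size = 0;
    Allocation alloc;
    State state = State::QUEUED;
};

// Returns true if the copy completed synchronously.
bool submit_upload(Upload &u, std::vector<Upload*> &worker_queue) {
    if (u.alloc.is_mapped()) {
        std::memcpy(u.alloc.p_data(), u.src, u.size); // direct write path
        u.state = State::DONE;
        return true;
    }
    worker_queue.push_back(&u);  // deferred: staged copy on worker thread
    return false;
}
```

The real code has one extra wrinkle the sketch omits: non-coherent mapped memory would need a flush after the memcpy, which the source marks with a `//TODO flush` assertion.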
File src/graphics/graphics.cpp changed (mode: 100644) (index a8a87aa..e7e0ff1)
1 1 #include "graphics.h" #include "graphics.h"
2 2 #include "debug_overlay.h" #include "debug_overlay.h"
3 #include "../instance/instance.h"
3 4
4 5 using namespace jen; using namespace jen;
5 6 using namespace jen::vk; using namespace jen::vk;
6 7
7 void cmd_transfer_clusters(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
8 auto &cb = p_mg->stages.clusters_buffer;
8 void cmd_transfer_clusters(GraphicsData *p_g, vkw::CmdBuffer cmd) {
9 auto &cb = p_g->stages.clusters_buffer;
9 10 using namespace vk::clusters; using namespace vk::clusters;
10 vkw::DeviceSize offset = p_mg->cmd_data.frame_index * BUFFER_SIZE;
11 vkw::DeviceSize offset = p_g->cmd_data.frame_index * BUFFER_SIZE;
11 12
12 13 auto &m = cb.buffer; auto &m = cb.buffer;
13 14 vkw::BufferChange bs = {cb.staging_buffer.buffer, cb.buffer.buffer}; vkw::BufferChange bs = {cb.staging_buffer.buffer, cb.buffer.buffer};
 
... ... void cmd_transfer_clusters(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
36 37 return vkw::flush_memory(d, rs); return vkw::flush_memory(d, rs);
37 38 */ */
38 39 } }
39 void cmd_transfer_font_atlas(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
40 auto &stages = p_mg->stages;
41 auto &text_data = p_mg->text_data;
40 void cmd_transfer_font_atlas(GraphicsData *p_g, vkw::CmdBuffer cmd) {
41 auto &stages = p_g->stages;
42 auto &text_data = p_g->text_data;
42 43
43 44 vkw::BarrierImMem barrier; { vkw::BarrierImMem barrier; {
44 45 barrier.access_change.src = {}; barrier.access_change.src = {};
 
... ... draw_mesh(vkw::CmdBuffer cmd, const jen::Model &model,
103 104 { {
104 105 auto &all = ind.p_data->allocation; auto &all = ind.p_data->allocation;
105 106 auto off = all.offset() + ind.offset; auto off = all.offset() + ind.offset;
106 cmd.cmd_set_index_buffer(all.buffer, off, ind.type);
107 cmd.cmd_set_index_buffer(all.buffer, off, vkw::IndexType(ind.type));
107 108 cmd.cmd_draw_indexed(ind.count); cmd.cmd_draw_indexed(ind.count);
108 109 } }
109 110 } }
110 void cmd_primary_shadow_map_models(ModuleGraphics *p_mg, vkw::CmdBuffer cmd)
111 void cmd_primary_shadow_map_models(GraphicsData *p_g, vkw::CmdBuffer cmd)
111 112 { {
112 auto &stages = p_mg->stages;
113 auto &cmd_data = p_mg->cmd_data;
114 auto &draw_data = p_mg->draw_data;
113 auto &stages = p_g->stages;
114 auto &cmd_data = p_g->cmd_data;
115 auto &draw_data = p_g->draw_data;
115 116
116 117 const PipelineShadowOmni &stage_sho = stages.pipeline_depth_cube; const PipelineShadowOmni &stage_sho = stages.pipeline_depth_cube;
117 118
 
... ... void cmd_primary_shadow_map_models(ModuleGraphics *p_mg, vkw::CmdBuffer cmd)
173 174 cmd.cmd_event_set_signaled(cmd_data.event<EventId::SHADOW_MAP>(), cmd.cmd_event_set_signaled(cmd_data.event<EventId::SHADOW_MAP>(),
174 175 vkw::StageFlag::LATE_FRAGMENT_TESTS); vkw::StageFlag::LATE_FRAGMENT_TESTS);
175 176 } }
176 void cmd_secondary_models(ModuleGraphics *p_mg, vkw::CmdBuffer cmd)
177 void cmd_secondary_models(GraphicsData *p_g, vkw::CmdBuffer cmd)
177 178 { {
178 auto &stages = p_mg->stages;
179 auto &cmd_data = p_mg->cmd_data;
180 auto &draw_data = p_mg->draw_data;
179 auto &stages = p_g->stages;
180 auto &cmd_data = p_g->cmd_data;
181 auto &draw_data = p_g->draw_data;
181 182 const auto &stage_sho = stages.pipeline_depth_cube; const auto &stage_sho = stages.pipeline_depth_cube;
182 bool write_normals = p_mg->settings.is_debug_normals_visible;
183 bool write_normals = p_g->settings.is_debug_normals_visible;
183 184
184 185 const stages::Offscreen &data = stages.stages.offscreen; const stages::Offscreen &data = stages.stages.offscreen;
185 186 jl::array<vkw::DescrSet, 4> descriptors; { jl::array<vkw::DescrSet, 4> descriptors; {
 
... ... REPEAT_MODELS:
233 234 goto REPEAT_MODELS; goto REPEAT_MODELS;
234 235 } }
235 236 } }
236 void cmd_secondary_composition(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
237 auto &stage = p_mg->stages.stages.composition;
237 void cmd_secondary_composition(GraphicsData *p_g, vkw::CmdBuffer cmd) {
238 auto &stage = p_g->stages.stages.composition;
238 239
239 240 cmd.cmd_set_pipeline(stage.pipeline,vkw::BindPoint::GRAPHICS); cmd.cmd_set_pipeline(stage.pipeline,vkw::BindPoint::GRAPHICS);
240 241 cmd.cmd_set_descr_sets(vkw::BindPoint::GRAPHICS, stage.pipelineLayout, cmd.cmd_set_descr_sets(vkw::BindPoint::GRAPHICS, stage.pipelineLayout,
241 242 stage.descriptor.set); stage.descriptor.set);
242 243 cmd.cmd_draw(3); cmd.cmd_draw(3);
243 244 } }
244 void cmd_secondary_texts(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
245 auto &stages = p_mg->stages;
245 void cmd_secondary_texts(GraphicsData *p_g, vkw::CmdBuffer cmd) {
246 auto &stages = p_g->stages;
246 247 auto &fonts = stages.stages.fonts; auto &fonts = stages.stages.fonts;
247 auto &text_data = p_mg->text_data;
248 text_data.clean_destroy_marked(p_mg->cmd_data.frame_index);
248 auto &text_data = p_g->text_data;
249 text_data.clean_destroy_marked(p_g->cmd_data.frame_index);
249 250
250 251 cmd.cmd_set_pipeline(fonts.pipeline, vkw::BindPoint::GRAPHICS); cmd.cmd_set_pipeline(fonts.pipeline, vkw::BindPoint::GRAPHICS);
251 252 cmd.cmd_set_descr_sets(vkw::BindPoint::GRAPHICS, fonts.layout, cmd.cmd_set_descr_sets(vkw::BindPoint::GRAPHICS, fonts.layout,
 
... ... void cmd_secondary_texts(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
260 261
261 262 for (uint32_t i = 0; i < text_data.texts.count(); ++i) { for (uint32_t i = 0; i < text_data.texts.count(); ++i) {
262 263 auto &p_text = text_data.texts[i]; auto &p_text = text_data.texts[i];
263 if (p_text->frame_index == p_mg->cmd_data.frame_index) {
264 if (p_text->frame_index == p_g->cmd_data.frame_index) {
264 265 text_data.destroy(p_text, i--); text_data.destroy(p_text, i--);
265 266 continue; continue;
266 267 } }
 
... ... void cmd_secondary_texts(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
300 301 #endif #endif
301 302 } }
302 303 [[nodiscard]] Result [[nodiscard]] Result
303 cmd_primary_main(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
304 auto &cmd_data = p_mg->cmd_data;
305 auto &pass = p_mg->stages.pass_main;
304 cmd_primary_main(GraphicsData *p_g, vkw::CmdBuffer cmd) {
305 auto &cmd_data = p_g->cmd_data;
306 auto &pass = p_g->stages.pass_main;
306 307
307 308 auto &renderPass = pass.render_pass; auto &renderPass = pass.render_pass;
308 309 auto &framebuffer = pass.framebuffers[cmd_data.image_index].framebuffer; auto &framebuffer = pass.framebuffers[cmd_data.image_index].framebuffer;
 
... ... cmd_primary_main(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
314 315 auto clear_values = PassMain::Attachments::CLEAR_VALUES(); auto clear_values = PassMain::Attachments::CLEAR_VALUES();
315 316 VkRect2D renderArrea; VkRect2D renderArrea;
316 317 renderArrea.offset = {}; renderArrea.offset = {};
317 renderArrea.extent = p_mg->stages.swap_chain.extent;
318 renderArrea.extent = p_g->stages.swap_chain.extent;
318 319
319 320 vkw::StageMaskChange stage_masks; vkw::StageMaskChange stage_masks;
320 321 stage_masks.src = vkw::StageFlag::LATE_FRAGMENT_TESTS; stage_masks.src = vkw::StageFlag::LATE_FRAGMENT_TESTS;
 
... ... cmd_primary_main(ModuleGraphics *p_mg, vkw::CmdBuffer cmd) {
332 333 return cmd.end(); return cmd.end();
333 334 } }
334 335
335 using CmdFoo = void (*) (ModuleGraphics *, vkw::CmdBuffer);
336 using CmdFoo = void (*) (GraphicsData *, vkw::CmdBuffer);
336 337
337 338 [[nodiscard]] Result [[nodiscard]] Result
338 write_cmd(ModuleGraphics *p_mg, vkw::CmdBuffer cmd, CmdFoo foo, bool one_time,
339 write_cmd(GraphicsData *p_g, vkw::CmdBuffer cmd, CmdFoo foo, bool one_time,
339 340 const vkw::Inheritance *p_inh = {}) { const vkw::Inheritance *p_inh = {}) {
340 341 vkw::CmdUsageMask use = one_time ? vkw::CmdUsage::ONE_TIME_SUBMIT : 0; vkw::CmdUsageMask use = one_time ? vkw::CmdUsage::ONE_TIME_SUBMIT : 0;
341 342 if (p_inh != nullptr) if (p_inh != nullptr)
 
... ... write_cmd(ModuleGraphics *p_mg, vkw::CmdBuffer cmd, CmdFoo foo, bool one_time,
344 345 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
345 346 return res; return res;
346 347
347 foo(p_mg, cmd);
348 foo(p_g, cmd);
348 349
349 350 return cmd.end(); return cmd.end();
350 351 } }
351 352 void void
352 primary_submit(ModuleGraphics *p_mg, vkw::CmdBuffer cmd,
353 primary_submit(GraphicsData *p_g, vkw::CmdBuffer cmd,
353 354 CmdData::Status *p_status, const vkw::QueueSubmit &submit, CmdData::Status *p_status, const vkw::QueueSubmit &submit,
354 355 Queue *p_queue, CmdFoo foo_write_cmd, bool one_time) { Queue *p_queue, CmdFoo foo_write_cmd, bool one_time) {
355 356 auto &status = *p_status; auto &status = *p_status;
 
... ... primary_submit(ModuleGraphics *p_mg, vkw::CmdBuffer cmd,
357 358 "task must not be submitted"); "task must not be submitted");
358 359 Result res; Result res;
359 360 if (status < CmdData::Status::CMD_WRITTEN) { if (status < CmdData::Status::CMD_WRITTEN) {
360 res = write_cmd(p_mg, cmd, foo_write_cmd, one_time);
361 res = write_cmd(p_g, cmd, foo_write_cmd, one_time);
361 362 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
362 363 return; return;
363 364 status = CmdData::Status::CMD_WRITTEN; status = CmdData::Status::CMD_WRITTEN;
 
... ... primary_submit(ModuleGraphics *p_mg, vkw::CmdBuffer cmd,
369 370 status = CmdData::Status::SUBMITTED_TO_QUEUE; status = CmdData::Status::SUBMITTED_TO_QUEUE;
370 371 } }
371 372 void void
372 secondary_write(ModuleGraphics *p_mg, vkw::CmdBuffer cmd,
373 secondary_write(GraphicsData *p_g, vkw::CmdBuffer cmd,
373 374 CmdData::Status *p_status, CmdFoo foo_write_cmd, bool one_time, CmdData::Status *p_status, CmdFoo foo_write_cmd, bool one_time,
374 375 const vkw::Inheritance *p_inh) { const vkw::Inheritance *p_inh) {
375 376 jassert(*p_status < CmdData::Status::CMD_WRITTEN, jassert(*p_status < CmdData::Status::CMD_WRITTEN,
376 377 "task must not be submitted"); "task must not be submitted");
377 378 Result res; Result res;
378 res = write_cmd(p_mg, cmd, foo_write_cmd, one_time, p_inh);
379 res = write_cmd(p_g, cmd, foo_write_cmd, one_time, p_inh);
379 380 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
380 381 return; return;
381 382 *p_status = CmdData::Status::CMD_WRITTEN; *p_status = CmdData::Status::CMD_WRITTEN;
 
... ... namespace TlFontAtlas {enum {
387 388 TRANSFERED = 1, LAST = TRANSFERED TRANSFERED = 1, LAST = TRANSFERED
388 389 };} };}
389 390
390 void task_cumpute_clusters(ModuleGraphics *p_mg) {
391 auto &cd = p_mg->cmd_data;
391 void task_cumpute_clusters(GraphicsData *p_g) {
392 auto &cd = p_g->cmd_data;
392 393 if (cd.clusters.compute) { if (cd.clusters.compute) {
393 p_mg->draw_data.update_lights_clusters();
394 p_g->draw_data.update_lights_clusters();
394 395 cd.clusters.compute = false; cd.clusters.compute = false;
395 396 } }
396 auto &buffer = p_mg->stages.clusters_buffer;
397 auto &buffer = p_g->stages.clusters_buffer;
397 398 auto &fr = cd.current_frame(); auto &fr = cd.current_frame();
398 399 if (fr.clusters.write_buffer) { if (fr.clusters.write_buffer) {
399 buffer.write_buffer(*p_mg->draw_data.p_clusters_data, cd.frame_index);
400 buffer.write_buffer(*p_g->draw_data.p_clusters_data, cd.frame_index);
400 401 fr.clusters.write_buffer = false; fr.clusters.write_buffer = false;
401 402 } }
402 403 auto &tl = cd.timeline<TimelineId::CLUSTERS_AND_DRAW_END>(); auto &tl = cd.timeline<TimelineId::CLUSTERS_AND_DRAW_END>();
403 404 Result res; Result res;
404 405 uint64_t sig; uint64_t sig;
405 406 sig = buffer.use_staging ? ClustersTL::COMPUTED : ClustersTL::TRANSFERED; sig = buffer.use_staging ? ClustersTL::COMPUTED : ClustersTL::TRANSFERED;
406 res = tl.signal(*p_mg->p_device, fr.timeline_value_clusters_and_draw + sig,
407 p_mg->timeline_foos);
407 res = tl.signal(*p_g->p_device, fr.timeline_value_clusters_and_draw + sig,
408 p_g->timeline_foos);
408 409 if (res == VK_SUCCESS) if (res == VK_SUCCESS)
409 410 cd.clusters.write_signaled = true; cd.clusters.write_signaled = true;
410 411 }; };
411 void task_all_transfer(ModuleGraphics *p_mg) {
412 void task_all_transfer(GraphicsData *p_g) {
412 413 Result res; Result res;
413 auto &cd = p_mg->cmd_data;
414 auto &cd = p_g->cmd_data;
414 415 auto &fr = cd.current_frame(); auto &fr = cd.current_frame();
415 auto &q = p_mg->p_device->queues.transfer;
416 auto &q = p_g->p_device->queues.transfer;
416 417 auto &cls_cmd = cd.cmd<CmdId::TRANSFER_CLUSTERS>(); auto &cls_cmd = cd.cmd<CmdId::TRANSFER_CLUSTERS>();
417 418 auto &fa_cmd = cd.cmd<CmdId::TRANSFER_FONT_ATLAS>(); auto &fa_cmd = cd.cmd<CmdId::TRANSFER_FONT_ATLAS>();
418 419 if (fr.clusters.transfer_buffer if (fr.clusters.transfer_buffer
419 420 and fr.status_transfer_clusters < CmdData::Status::CMD_WRITTEN) { and fr.status_transfer_clusters < CmdData::Status::CMD_WRITTEN) {
420 res = write_cmd(p_mg, cls_cmd, cmd_transfer_clusters, false);
421 res = write_cmd(p_g, cls_cmd, cmd_transfer_clusters, false);
421 422 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
422 423 return; return;
423 424 fr.status_transfer_clusters = CmdData::Status::CMD_WRITTEN; fr.status_transfer_clusters = CmdData::Status::CMD_WRITTEN;
424 425 } }
425 426 if (fr.status_transfer_font_data < CmdData::Status::CMD_WRITTEN) { if (fr.status_transfer_font_data < CmdData::Status::CMD_WRITTEN) {
426 res = write_cmd(p_mg, fa_cmd, cmd_transfer_font_atlas, false);
427 res = write_cmd(p_g, fa_cmd, cmd_transfer_font_atlas, false);
427 428 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
428 429 return; return;
429 430 fr.status_transfer_font_data = CmdData::Status::CMD_WRITTEN; fr.status_transfer_font_data = CmdData::Status::CMD_WRITTEN;
 
... ... void task_all_transfer(ModuleGraphics *p_mg) {
466 467 fr.status_transfer_font_data = CmdData::Status::SUBMITTED_TO_QUEUE; fr.status_transfer_font_data = CmdData::Status::SUBMITTED_TO_QUEUE;
467 468 fr.clusters.transfer_buffer = false; fr.clusters.transfer_buffer = false;
468 469 } }
469 void task_shadow_map_models(ModuleGraphics *p_mg) {
470 auto &cd = p_mg->cmd_data;
471 p_mg->draw_data.update_shadow_light_ubd(&p_mg->stages, cd.frame_index);
470 void task_shadow_map_models(GraphicsData *p_g) {
471 auto &cd = p_g->cmd_data;
472 p_g->draw_data.update_shadow_light_ubd(&p_g->stages, cd.frame_index);
472 473 auto &fr = cd.current_frame(); auto &fr = cd.current_frame();
473 auto &q = p_mg->p_device->queues.graphics;
474 auto &q = p_g->p_device->queues.graphics;
474 475 auto &cmd = cd.cmd<CmdId::SHADOW_OMNI>(); auto &cmd = cd.cmd<CmdId::SHADOW_OMNI>();
475 476 vkw::QueueSubmit submit(cmd); vkw::QueueSubmit submit(cmd);
476 primary_submit(p_mg, cmd, &fr.status_shadow_map, submit,
477 primary_submit(p_g, cmd, &fr.status_shadow_map, submit,
477 478 &q, cmd_primary_shadow_map_models, true); &q, cmd_primary_shadow_map_models, true);
478 479 } }
479 void task_secondary_models(ModuleGraphics *p_mg) {
480 auto &cd = p_mg->cmd_data;
480 void task_secondary_models(GraphicsData *p_g) {
481 auto &cd = p_g->cmd_data;
481 482 auto &fr = cd.current_frame(); auto &fr = cd.current_frame();
482 483 auto &cmd = cd.cmd<CmdId::MODELS>(); auto &cmd = cd.cmd<CmdId::MODELS>();
483 auto &pass = p_mg->stages.pass_main;
484 auto &pass = p_g->stages.pass_main;
484 485 vkw::Inheritance inh(pass.render_pass, vkw::Inheritance inh(pass.render_pass,
485 486 pass.framebuffers[cd.image_index].framebuffer, 0); pass.framebuffers[cd.image_index].framebuffer, 0);
486 secondary_write(p_mg, cmd, &fr.status_models, cmd_secondary_models,
487 secondary_write(p_g, cmd, &fr.status_models, cmd_secondary_models,
487 488 true, &inh); true, &inh);
488 489 } }
489 void task_secondary_composition(ModuleGraphics *p_mg) {
490 auto &cd = p_mg->cmd_data;
490 void task_secondary_composition(GraphicsData *p_g) {
491 auto &cd = p_g->cmd_data;
491 492 auto &img = cd.current_image(); auto &img = cd.current_image();
492 493 auto &cmd = cd.cmd<CmdId::COMPOSITION>(); auto &cmd = cd.cmd<CmdId::COMPOSITION>();
493 auto &pass = p_mg->stages.pass_main;
494 auto &pass = p_g->stages.pass_main;
494 495 vkw::Inheritance inh(pass.render_pass, vkw::Inheritance inh(pass.render_pass,
495 496 pass.framebuffers[cd.image_index].framebuffer, 1); pass.framebuffers[cd.image_index].framebuffer, 1);
496 secondary_write(p_mg, cmd, &img.status_composition,
497 secondary_write(p_g, cmd, &img.status_composition,
497 498 cmd_secondary_composition, false, &inh); cmd_secondary_composition, false, &inh);
498 499 } }
499 void task_secondary_texts(ModuleGraphics *p_mg) {
500 auto &cd = p_mg->cmd_data;
500 void task_secondary_texts(GraphicsData *p_g) {
501 auto &cd = p_g->cmd_data;
501 502 auto &img = cd.current_image(); auto &img = cd.current_image();
502 503 auto &cmd = cd.cmd<CmdId::TEXTS>(); auto &cmd = cd.cmd<CmdId::TEXTS>();
503 auto &pass = p_mg->stages.pass_main;
504 auto &pass = p_g->stages.pass_main;
504 505 vkw::Inheritance inh(pass.render_pass, vkw::Inheritance inh(pass.render_pass,
505 506 pass.framebuffers[cd.image_index].framebuffer, 1); pass.framebuffers[cd.image_index].framebuffer, 1);
506 secondary_write(p_mg, cmd, &img.status_texts, cmd_secondary_texts,
507 secondary_write(p_g, cmd, &img.status_texts, cmd_secondary_texts,
507 508 true, &inh); true, &inh);
508 509 } }
509 510
510 void task_primary_main(ModuleGraphics *p_mg) {
511 auto &cd = p_mg->cmd_data;
511 void task_primary_main(GraphicsData *p_g) {
512 auto &cd = p_g->cmd_data;
512 513 auto &cmd = cd.cmd<CmdId::PRIMARY>(); auto &cmd = cd.cmd<CmdId::PRIMARY>();
513 514 auto &fr = cd.current_frame(); auto &fr = cd.current_frame();
514 515 Result res; Result res;
515 516 if (fr.status_primary < CmdData::Status::CMD_WRITTEN) { if (fr.status_primary < CmdData::Status::CMD_WRITTEN) {
516 res = cmd_primary_main(p_mg, cmd);
517 res = cmd_primary_main(p_g, cmd);
517 518 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
518 519 return; return;
519 520 fr.status_primary = CmdData::Status::CMD_WRITTEN; fr.status_primary = CmdData::Status::CMD_WRITTEN;
 
... ... void task_primary_main(ModuleGraphics *p_mg) {
548 549 vkw::TimelineSubmit tl({wait_values.begin(), wait.semaphores.count32()}, vkw::TimelineSubmit tl({wait_values.begin(), wait.semaphores.count32()},
549 550 signal_values); signal_values);
550 551 vkw::QueueSubmit submit(cmd, wait, signals, &tl); vkw::QueueSubmit submit(cmd, wait, signals, &tl);
551 res = p_mg->p_device->queues.graphics.submit_locked(submit);
552 res = p_g->p_device->queues.graphics.submit_locked(submit);
552 553 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
553 554 return; return;
554 555 fr.status_primary = CmdData::Status::SUBMITTED_TO_QUEUE; fr.status_primary = CmdData::Status::SUBMITTED_TO_QUEUE;
 
... ... void task_primary_main(ModuleGraphics *p_mg) {
556 557 } }
557 558 } }
558 559
559 [[nodiscard]] Result ModuleGraphics::update_stages() {
560 [[nodiscard]] Result GraphicsData::update_stages() {
560 561 auto &queue = p_device->queues.graphics; auto &queue = p_device->queues.graphics;
561 562 queue.lock(); queue.lock();
562 563 queue.queue.wait_idle(); queue.queue.wait_idle();
 
... ... void task_primary_main(ModuleGraphics *p_mg) {
571 572 return VK_SUCCESS; return VK_SUCCESS;
572 573 } }
573 574
574 [[nodiscard]] Result ModuleGraphics::acquire_image() {
575 [[nodiscard]] Result GraphicsData::acquire_image() {
575 576 if (cmd_data.is_image_acquired) CHECK: if (cmd_data.is_image_acquired) CHECK:
     return cmd_data.prepare_per_image(p_device, stages.swap_chain.image_count);
 BEGIN:

@@ ... @@ BEGIN:
     cmd_data.on_acquire(im_index);
     goto CHECK;
 }
-[[nodiscard]] Result ModuleGraphics::present() {
+[[nodiscard]] Result GraphicsData::present() {
     Result res;
     auto &queue_present = p_device->queues.present;
     auto &wait_semaphore = cmd_data.semaphore<SemId::DRAW_IMAGE>();

@@ ... @@ BEGIN:
 draw_frame(const jl::rarray<const Model> &models)
 {
     Result res;
-    draw_data.models = models;
-    res = acquire_image();
+    p->draw_data.models = models;
+    res = p->acquire_image();
     if (res != VK_SUCCESS)
         return res;

-    draw_data.update_states();
-    cmd_data.update(true, draw_data.is_lights_out_of_date,
-                    stages.clusters_buffer.use_staging, true);
+    p->draw_data.update_states();
+    p->cmd_data.update(true, p->draw_data.is_lights_out_of_date,
+                       p->stages.clusters_buffer.use_staging, true);

-    auto &fr = cmd_data.current_frame();
+    auto &fr = p->cmd_data.current_frame();
     if (fr.wait_draw_timeline) {
-        auto &timeline = cmd_data.timeline<TimelineId::CLUSTERS_AND_DRAW_END>();
-        auto timeout = settings.wait_for_gpu_frame_draw ? vkw::TIMEOUT_INFINITE : 0;
+        auto &timeline = p->cmd_data.timeline<TimelineId::CLUSTERS_AND_DRAW_END>();
+        auto timeout = p->settings.wait_for_gpu_frame_draw
+                       ? vkw::TIMEOUT_INFINITE : 0;
         uint64_t wait_val;
         wait_val = fr.timeline_value_clusters_and_draw + ClustersTL::DRAW_DONE;
-        res = timeline.wait(p_device->device, wait_val, timeout, timeline_foos);
+        res = timeline.wait(p->p_device->device, wait_val, timeout,
+                            p->timeline_foos);
         if (res != VK_SUCCESS) {
             if (res == VK_TIMEOUT)
                 return VK_NOT_READY;

@@ ... @@ draw_frame(const jl::rarray<const Model> &models)
         fr.timeline_value_clusters_and_draw += ClustersTL::LAST;
         fr.timeline_value_font_atlas += TlFontAtlas::LAST;
     }
-    auto &im = cmd_data.current_image();
+    auto &im = p->cmd_data.current_image();
 #ifdef JEN_PARALLEL_DRAW_FRAME
-    jth::Pool &pool = p_instance->thread_pool;
-    auto pool_queue_i = p_instance->thread_pool_queue_indices.drawFrame;
+    jth::Pool &pool = p->p_instance->thread_pool;
+    auto pool_queue_i = p->p_instance->thread_pool_queue_indices.drawFrame;
     auto task_submit = [&pool, &pool_queue_i, this](auto foo) {
-        return pool.task_submit(jth::Task(this, foo), pool_queue_i);
+        return pool.task_submit(jth::Task(p, foo), pool_queue_i);
     };
 #else
     auto task_submit = [this](auto foo) {
-        foo(this);
+        foo(p);
         return foo ? true : false; //removes "code never be executed "warnings
     };
 #endif
     do {
-        if (cmd_data.clusters.compute or fr.clusters.write_buffer
-            or not cmd_data.clusters.write_signaled)
+        if (p->cmd_data.clusters.compute or fr.clusters.write_buffer
+            or not p->cmd_data.clusters.write_signaled)
             if (not task_submit(task_cumpute_clusters))
                 break;
         if (fr.clusters.transfer_buffer or

@@ ... @@ draw_frame(const jl::rarray<const Model> &models)
         return VK_ERROR_OUT_OF_HOST_MEMORY;
 EXIT:

-    draw_data.update_camera_ubd(&stages, cmd_data.frame_index);
-    destroyer.clean_by_index(cmd_data.frame_index);
+    p->draw_data.update_camera_ubd(&p->stages, p->cmd_data.frame_index);
+    p->destroyer.clean_by_index(p->cmd_data.frame_index);

 #ifdef JEN_PARALLEL_DRAW_FRAME
     pool.wait_idle(); //TODO wait idle queue;
 #endif

     if ((fr.clusters.transfer_wait_signal
-         and not cmd_data.clusters.write_signaled)
+         and not p->cmd_data.clusters.write_signaled)
         or
         fr.clusters.transfer_buffer
         or

@@ ... @@ EXIT:
         im.status_texts < CmdData::Status::CMD_WRITTEN)
         return VK_NOT_READY;

-    task_primary_main(this);
+    task_primary_main(p);
     if (fr.status_primary < CmdData::Status::SUBMITTED_TO_QUEUE)
         return VK_NOT_READY;
-    res = present();
+    res = p->present();
     if (res == VK_SUCCESS) {
         jl::time current_time = jl::time::current();
-        jl::time elapsed = last_draw_time.elapsed(current_time);
-        last_draw_time = current_time;
+        jl::time elapsed = p->last_draw_time.elapsed(current_time);
+        p->last_draw_time = current_time;

-        if (p_debug_overlay != nullptr) {
-            if (settings.debug_overlay.is_visible) {
-                res = p_debug_overlay->update(this, elapsed);
+        if (p->p_debug_overlay != nullptr) {
+            if (p->settings.debug_overlay.is_visible) {
+                res = p->p_debug_overlay->update({p}, elapsed);
                 if (res != VK_SUCCESS)
                     return res;
             }
-            else p_debug_overlay->disable(this);
+            else p->p_debug_overlay->disable({p});
         }
     }
     return res;
File src/graphics/graphics.h changed (mode: 100644) (index fd6aeb7..7a8776e)

 #include "draw_data/draw_data.h"
 #include "cmd_data.h"
-#include "../device/device.h"
 #include <jlib/time.h>
+#include <jen/graphics.h>

-namespace jen {
-struct DebugOverlay;
+struct jen::GraphicsData {

-struct ModuleGraphics
-{
-    [[nodiscard]] Result
-    init(Instance*, vk::Device*, const GraphicsSettings&);
+    [[nodiscard]] Result
+    init(Instance*, Device*, const GraphicsSettings&);

-    void destroy() { destroy(9999); }
+    void destroy() { destroy(9999); }

-    [[nodiscard]] Result apply_settings();
-    void apply_camera(const Camera&, const Frustum&);
-    void apply_light_shadow(const Light&);
-    void apply_lights(LightsDraw *p_lights);
+    Instance *p_instance;
+    Device *p_device;

-    [[nodiscard]] Result
-    create(const WriteData&, GpuData **pp_dst, bool free_source);
+    GraphicsSettings settings;

-    /// @param p_allocated externally allocated GpuData memory,
-    /// will be deallocated after destroy(GpuData*,bool)
-    [[nodiscard]] Result
-    create(const WriteData&, GpuData *p_allocated, bool free_source);
+    vk::DrawStages stages;
+    vk::CmdData cmd_data;
+    vk::GpuTransfer gpu_transfer;
+    vk::DrawData draw_data;
+    vk::TextData text_data;
+    vk::Destroyer destroyer;

-    [[nodiscard]] Result
-    create(const jrf::Image *p_texture, GpuTexture **pp_dst, bool free_src);
+    vkw::TimelineFunctions timeline_foos;

-    [[nodiscard]] bool
-    create(const char* font_path, GlyphManager **pp_dst);
+    DebugOverlay *p_debug_overlay;
+    jl::time last_draw_time;
+    jl::time elapsed_per_frame;

-    /// @param pp_text Valid handle or nullptr.
-    /// after calling with nullptr important to fill Text.data member
-    /// Text.data can be changed at any time for changing rendering options
-    /// for next frame draw
-    [[nodiscard]] Result
-    text_update(Text::Layout layout, uint16_t pixel_size, Text::Chars chars,
-                Text::Colors_RGBA colors, GlyphManager *p_font, Text **pp_text);
-
-    void destroy(GlyphManager *p_font);
-    void destroy(Text *p_text);
-
-    void destroy(GpuTexture*, bool destroy_src_image);
-    void destroy(GpuData*, bool destroy_source);
-
-    [[nodiscard]] Result draw_frame(const jl::rarray<const Model> &models);
-
-    [[nodiscard]] Result update_settings_from_input();
-
-    Instance *p_instance;
-    vk::Device *p_device;
-
-    GraphicsSettings settings;
-
-    vk::DrawStages stages;
-    vk::CmdData cmd_data;
-    vk::GpuTransfer gpu_transfer;
-    vk::DrawData draw_data;
-    vk::TextData text_data;
-    vk::Destroyer destroyer;
-
-    vkw::TimelineFunctions timeline_foos;
-
-    DebugOverlay *p_debug_overlay;
-    jl::time last_draw_time;
-    jl::time elapsed_per_frame;
-
-    using PF_User = void(*)(void*);
-
-    struct Loop {
-        void run(ModuleGraphics *p_mg, void *p_update_arg, PF_User pf_update);
-
-        Result result;
-        jl::time last_update_time;
-        jl::time elapsed_after_update;
-        bool pause;
-        bool is_drawn;
-        bool draw;
-        bool break_loop;
-        bool wait_events;
-        jl::rarray<const Model> models;
-    };
-private:
-    [[nodiscard]] Result acquire_image();
-    [[nodiscard]] Result present();
-    [[nodiscard]] Result update_stages();
-    void destroy(int code);
-};
-}
+private:
+    friend ModuleGraphics;
+    [[nodiscard]] Result acquire_image();
+    [[nodiscard]] Result present();
+    [[nodiscard]] Result update_stages();
+    void destroy(int code);
+};
File src/graphics/graphics_interface.cpp changed (mode: 100644) (index bf3f59a..f563e8e)
 #include "graphics.h"
 #include "debug_overlay.h"
+#include "../instance/instance.h"

 using namespace jen;
 using namespace jen::vk;

-[[nodiscard]] Result ModuleGraphics::
+[[nodiscard]] Result GraphicsData::
 init(Instance *p_inst, Device *p_dev, const GraphicsSettings &setts)
 {
     p_instance = p_inst;

@@ ... @@ init(Instance *p_inst, Device *p_dev, const GraphicsSettings &setts)
         goto EXIT;
     }

-    res = text_data.init(&p_device->buffer_allocator, stages.stages.fonts);
+    res = text_data.init(p_device->buffer_allocator, stages.stages.fonts);
     if (res != VK_SUCCESS) {
         destroy(2);
         goto EXIT;

@@ ... @@ init(Instance *p_inst, Device *p_dev, const GraphicsSettings &setts)
         destroy(6);
         goto EXIT;
     }
-    res = p_debug_overlay->init(this, settings.debug_overlay.font_path);
+    res = p_debug_overlay->init({this}, settings.debug_overlay.font_path);
     if (res != VK_SUCCESS) {
         destroy(7);
         goto EXIT;

@@ ... @@ EXIT:
     return res;
 }

-void ModuleGraphics::destroy(int code)
+void GraphicsData::destroy(int code)
 {
     (void)p_device->device.wait_idle();

@@ ... @@ void ModuleGraphics::destroy(int code)
     {
     default:
     [[fallthrough]]; case 8: if (p_debug_overlay != nullptr)
-                                 p_debug_overlay->destroy(this);
+                                 p_debug_overlay->destroy({this});
     [[fallthrough]]; case 7: if (p_debug_overlay != nullptr)
                                  jl::deallocate(&p_debug_overlay);
     [[fallthrough]]; case 6: draw_data.destroy();

@@ ... @@ create(const WriteData &data, GpuData *p_allocated, bool free_source) {
     p_allocated->state = ResourceState::LOADING;
     p_allocated->source = data;
     p_allocated->destroy_source = free_source;
-    return gpu_transfer.submit(GpuTransfer::Priority::LOW, p_allocated);
+    return p->gpu_transfer.submit(GpuTransfer::Priority::LOW, p_allocated);
 }
 [[nodiscard]] Result ModuleGraphics::
 create(const WriteData &data, GpuData **pp_dst, bool free_source) {

@@ ... @@ void ModuleGraphics::destroy(GpuData *p_data, bool destroy_source) {
     if (p_data == nullptr)
         return;
     p_data->destroy_source = destroy_source;
-    destroyer << p_data;
+    p->destroyer << p_data;
 }

 [[nodiscard]] Result ModuleGraphics::

@@ ... @@ create(const jrf::Image *p_t, GpuTexture **pp_dst, bool free_source){
     if (not jl::allocate(pp_dst))
         return VK_ERROR_OUT_OF_HOST_MEMORY;
     (*pp_dst)->init(p_t, free_source);
-    auto r = gpu_transfer.submit(GpuTransfer::Priority::LOW,
-                                 *pp_dst, stages.textureSampler);
+    auto r = p->gpu_transfer.submit(GpuTransfer::Priority::LOW,
+                                    *pp_dst, p->stages.textureSampler);
     if (r != VK_SUCCESS)
         jl::deallocate(pp_dst);
     return r;

@@ ... @@ destroy(GpuTexture* p_gpuTexture, bool destroy_src) {
     if (p_gpuTexture == nullptr)
         return;
     p_gpuTexture->is_source_destroy_allowed = destroy_src;
-    destroyer << p_gpuTexture;
+    p->destroyer << p_gpuTexture;
 }

 [[nodiscard]] Result ModuleGraphics:: apply_settings() {
-    return update_stages();
+    return p->update_stages();
+}
+
+jen::ResourceState jen::resource_state(const jen::GpuData * const p) {
+    return p->state;
+}
+jen::ResourceState jen::resource_state(const jen::GpuTexture* const p) {
+    return p->state;
 }

 void ModuleGraphics::apply_camera(const Camera &cam, const Frustum &fru) {
-    draw_data.apply_camera(cam, fru);
+    p->draw_data.apply_camera(cam, fru);
 }
 void ModuleGraphics::apply_light_shadow(const Light &light) {
-    draw_data.apply_shadow_light(light);
+    p->draw_data.apply_shadow_light(light);
 }
 void ModuleGraphics::apply_lights(LightsDraw *p_lights) {
-    draw_data.apply_lights(p_lights);
+    p->draw_data.apply_lights(p_lights);
 }

 [[nodiscard]] bool ModuleGraphics::
 create(const char* font_path, GlyphManager **pp_dst) {
-    return text_data.create_font(font_path, pp_dst);
+    return p->text_data.create_font(font_path, pp_dst);
 }
 [[nodiscard]] Result ModuleGraphics::
-text_update (Text::Layout layout, uint16_t pixel_size, Text::Chars chars,
-             Text::Colors_RGBA colors, GlyphManager *p_font, Text **pp_text) {
+text_update (TextLayout layout, uint16_t pixel_size, Chars chars,
+             Colors_RGBA colors, GlyphManager *p_font, GpuText **pp_text) {
     return p_font->text_update(layout, pixel_size, chars, colors,
-                               cmd_data.frame_index, pp_text);
+                               p->cmd_data.frame_index, pp_text);
 }
 void ModuleGraphics::destroy(GlyphManager *p_font) {
     if (p_font != nullptr)
-        text_data.destroy_font(p_font, cmd_data.frame_index);
+        p->text_data.destroy_font(p_font, p->cmd_data.frame_index);
 }
-void ModuleGraphics::destroy(Text *p_text) {
+void ModuleGraphics::destroy(GpuText *p_text) {
     if (p_text != nullptr)
-        p_text->destroy_mark(cmd_data.frame_index);
+        p_text->destroy_mark(p->cmd_data.frame_index);
 }

 template<typename Foo, typename ...Args>

@@ ... @@ bool call_key(Key::Board key, bool *p_pressed, const Window &input, Foo foo,
                       : Window::CursorMode::DISABLED);
     };

-    auto p_settings = &settings;
+    auto p_settings = &p->settings;

     auto polygon_mode = [p_settings]() {
         auto &pm = p_settings->draw_mode;

@@ ... @@ bool call_key(Key::Board key, bool *p_pressed, const Window &input, Foo foo,
             pm = GraphicsSettings::DrawMode::DEFAULT;
     };
     auto cull_mode = [p_settings]() {
-        p_settings->cull_mode = p_settings->cull_mode == vkw::CullMode::NONE
-                              ? vkw::CullMode::BACK : vkw::CullMode::NONE;
+        using CM = GraphicsSettings::CullMode;
+        p_settings->cull_mode = p_settings->cull_mode == CM::NONE
+                              ? CM::BACK : CM::NONE;
     };
     auto light_mode = [p_settings]() {
         using Shading = jen::GraphicsSettings::Shading;

@@ ... @@ bool call_key(Key::Board key, bool *p_pressed, const Window &input, Foo foo,
     };
     auto toggle = [](bool *p_d) { *p_d = not *p_d; };

-    auto &w = p_instance->window;
+    auto &w = p->p_instance->window;
     bool changed = false;
     call_key(p_settings->debug_overlay.toggle_key,
              is_f_pressed, w, toggle,

@@ ... @@ bool call_key(Key::Board key, bool *p_pressed, const Window &input, Foo foo,
 }

 void ModuleGraphics::Loop::
-run(ModuleGraphics *p_mg, void *p_update_arg, PF_User pf_update)
+run(ModuleGraphics mg, void *p_update_arg, PF_User pf_update)
 {
     result = VK_SUCCESS;
     pause = false;

@@ ... @@ run(ModuleGraphics *p_mg, void *p_update_arg, PF_User pf_update)

     bool pause_hold = false;

-    auto &window = p_mg->p_instance->window;
+    auto &window = mg.p->p_instance->window;
     window.set_visibility(true);

     while (not window.is_window_close_fired())

@@ ... @@ run(ModuleGraphics *p_mg, void *p_update_arg, PF_User pf_update)
     }

     if (wait_events)
-        p_mg->p_instance->window.wait();
+        mg.p->p_instance->window.wait();
     else
-        p_mg->p_instance->window.poll();
+        mg.p->p_instance->window.poll();

     auto update_time = jl::time::current();
     elapsed_after_update = last_update_time.elapsed(update_time);

@@ ... @@ run(ModuleGraphics *p_mg, void *p_update_arg, PF_User pf_update)
         continue;

-    result = p_mg->draw_frame(models);
+    result = mg.draw_frame(models);
     is_drawn = result == VK_SUCCESS;
     if (is_drawn)
         window.is_damaged = false;
File src/graphics/jrl_defs.h deleted (index a552820..0000000)
#pragma once
#include "graphics.h"
#include <jlib/darray_mods.h>
#include <jrf/read.h>

namespace jen::detail
{
struct RM_Image : TextureData {
    void destroy(ModuleGraphics *p_mg) {
        p_mg->destroy(p_data, true);
    }
};
struct RM_Vertices : VertexData {
    void destroy(ModuleGraphics *p_mg) {
        p_mg->destroy(p_data, true);
    }
};
struct RM_Indices : IndexData {
    void destroy(ModuleGraphics *p_mg) {
        p_mg->destroy(p_data, true);
    }
};

template<typename T>
struct ResRef {
    template<typename ...Args>
    void destroy(Args...args) {
        if (is_data)
            u.res.destroy(args...);
        else
            u.path.destroy();
    }

    union {
        T res;
        jl::string path;
    } u;
    bool is_data;
};

struct RM_Mesh {
    void destroy(ModuleGraphics *p_mg) {
        ver.destroy(p_mg);
        ind.destroy(p_mg);
    }
    ResRef<RM_Vertices> ver;
    ResRef<RM_Indices> ind;
};
struct RM_Model {
    void destroy(ModuleGraphics *p_mg) {
        mesh .destroy(p_mg);
        image.destroy(p_mg);
    }

    ResRef<RM_Mesh> mesh;
    ResRef<RM_Image> image;
};
struct RM_Scene : jrf::Scene {
    void destroy(ModuleGraphics* =nullptr) {
        jrf::Scene::destroy();
    }
};

template<jrf::ResourceType>
struct RM_Resource { using T = void; };
template<>
struct RM_Resource<jrf::IMAGE> { using T = RM_Image; };
template<>
struct RM_Resource<jrf::VERTICES> { using T = RM_Vertices; };
template<>
struct RM_Resource<jrf::INDICES> { using T = RM_Indices; };
template<>
struct RM_Resource<jrf::MESH> { using T = RM_Mesh; };
template<>
struct RM_Resource<jrf::MODEL> { using T = RM_Model; };
template<>
struct RM_Resource<jrf::SCENE> { using T = RM_Scene; };
}

namespace jen {
template<jrf::ResourceType RT>
struct Resource {
    constexpr static const jrf::ResourceType TYPE = RT;
    using T = typename detail::RM_Resource<RT>::T;
    operator const T& () { return res; }

    jl::string path;
    T res;
};
}

namespace jen::detail {
template<jrf::ResourceType RT>
struct ResHandle : Resource<RT> {
    using Resource<RT>::path;
    using Resource<RT>::res;

    void destroy(ModuleGraphics *p_mg) {
        res.destroy(p_mg);
        path.destroy();
    }

    [[nodiscard]] bool
    operator >= (const ResHandle &r) const { return path >= r.path; }
    [[nodiscard]] bool
    operator >= (const jl::string_ro &str) const { return str <= path; }
    [[nodiscard]] bool
    operator > (const jl::string_ro &str) const { return str < path; }

    uint64_t use_count;
};
}
File src/graphics/model.h deleted (index 337e2cc..0000000)
#pragma once

#include "draw_stages/offscreen/offscreen.h"
#include "resources/data.h"
#include "resources/texture.h"
#include "cmd_data.h"

namespace jen
{
struct VertexData {
    GpuData *p_data;
    VAttrsOffsets offsets;
    uint32_t count;
};
struct IndexData {
    GpuData *p_data;
    vkw::DeviceSize offset;
    uint32_t count;
    vkw::IndexType type;
};
struct TextureData {
    GpuTexture *p_data;
    uint32_t layer_index;
};
struct ModelWorld {
    math::m4f transform;
    math::v3f position;
    math::v3i32 position_shift;
};

struct Model {
    VertexData ver;
    IndexData ind;
    TextureData tex;
    ModelWorld world;

    [[nodiscard]] bool is_ready_to_draw() const {
        if (tex.p_data->state != ResourceState::DONE)
            return false;
        if (ver.p_data->state != ResourceState::DONE)
            return false;

        if ( ind.p_data != nullptr
            and ver.p_data->state != ResourceState::DONE
            and ind.count != 0)
            return false;

        return true;
    }
};
}
File src/graphics/resources.h added (mode: 100644) (index 0000000..934f588)
#pragma once
#include <jen/resources.h>
#include <jen/detail/gpu_image.h>
#include <jrf/image.h>
#include "cmd_data.h"

struct jen::GpuData {
    DeviceBufferPart allocation;
    ResourceState state;
    WriteData source;
    bool destroy_source;

    void destroy_source_if_allowed() {
        if (not destroy_source || source.p == nullptr)
            return;
        jl::deallocate(&source.p);
        destroy_source = false;
        source.p = nullptr;
    }
};
static_assert(jen::GPU_DATA_ALLOCATION_SIZE == sizeof(jen::GpuData));

struct jen::GpuTexture {
    void init(const jrf::Image *p_src, bool destroy_source) {
        state = ResourceState::LOADING;
        is_source_destroy_allowed = destroy_source;
        source = *p_src;
        mip_levels = mip_level(p_src->extent.width, p_src->extent.height);
    }
    void destroy_source_if_allowed() {
        if (is_source_destroy_allowed) {
            source.destroy();
            is_source_destroy_allowed = false;
        }
    }
    static uint32_t mip_level(uint32_t width, uint32_t height) {
        return uint32_t(std::floor(std::log2(jl::max(width, height)))) + 1;
    }
    GpuImage<GpuImageMode::VIEW> gpu_im;
    DescriptorTexture descriptor;
    ResourceState state;
    uint32_t mip_levels;
    bool is_source_destroy_allowed;
    jrf::Image source;
};
static_assert(jen::GPU_TEXTURE_ALLOCATION_SIZE == sizeof(jen::GpuTexture));


struct jen::GpuText
{
    [[nodiscard]] static GpuText* new_() { return nullptr; }

    void
    fill(math::v2u32 occupied_, size_t vertices_offset,
         size_t indices_offset, uint16_t indice_count)
    {
        occupied = occupied_;
        vertexesOffset = vertices_offset;
        indexesOffset = indices_offset;
        indexCount = indice_count;
        frame_index = vk::Frame(-1);
    }

    void destroy_mark(vk::Frame frame_index) {
        this->frame_index = frame_index;
    }

    [[nodiscard]] math::v2f
    get_pos(math::v2u32 screen_half_extent) const
    {
        math::v2f offset = pos.offset;
        switch (pos.text_offset_mode.x) {
        using P = TextOffsetMode::X;
        case P::LEFT:   offset.x += occupied.x / 2;
            break;
        case P::RIGHT:  offset.x -= occupied.x / 2;
            break;
        case P::CENTER:
            break;
        }
        switch (pos.text_offset_mode.y) {
        using P = TextOffsetMode::Y;
        case P::TOP:
            break;
        case P::BOTTOM: offset.y -= occupied.y;
            break;
        case P::CENTER: offset.y -= occupied.y / 2;
            break;
        }

        switch (pos.screen_offset_mode.x) {
        using P = TextOffsetMode::X;
        case P::LEFT:   offset.x -= screen_half_extent.x;
            break;
        case P::RIGHT:  offset.x += screen_half_extent.x;
            break;
        case P::CENTER:
            break;
        }
        switch (pos.screen_offset_mode.y) {
        using P = TextOffsetMode::Y;
        case P::TOP:    offset.y -= screen_half_extent.y;
            break;
        case P::BOTTOM: offset.y += screen_half_extent.y;
            break;
        case P::CENTER:
            break;
        }
        return offset;
    }


    math::v2u32 occupied;
    TextPosition pos;
    uint16_t size;

    DeviceBufferPart buffer;

    size_t vertexesOffset;
    size_t indexesOffset;
    uint32_t indexCount;
    vk::Frame frame_index;

    GlyphManager *p_parent;

    uint16_t unique_glyph_count;
    uint32_t unique_glyph_ids[];
};
File src/graphics/resources/data.h deleted (index da00334..0000000)
#pragma once
#include "../../device/allocator/buffer_allocator.h"
#include "state.h"

namespace jen {
struct WriteData {
    void *p;
    size_t size;
};
struct GpuData {
    vk::DeviceBufferPart allocation;
    ResourceState state;
    WriteData source;
    bool destroy_source;

    void destroy_source_if_allowed() {
        if (not destroy_source || source.p == nullptr)
            return;
        jl::deallocate(&source.p);
        destroy_source = false;
        source.p = nullptr;
    }
};
}
File src/graphics/resources/state.h deleted (index 1e22a6b..0000000)
#pragma once
#include <cinttypes>

namespace jen {
enum class ResourceState : uint8_t {
    LOADING = 0b00,
    DONE    = 0b01
};
}
File src/graphics/resources/text.h deleted (index 794ec4b..0000000)
#pragma once

#include "../../device/allocator/buffer_allocator.h"
#include "../cmd_data.h"
#include <jrf/image.h>
#include <jlib/rarray.h>

namespace jen {
struct GlyphManager;

struct Text
{
    struct OffsetMode {
        enum class X : uint8_t { LEFT, CENTER, RIGHT } x;
        enum class Y : uint8_t { TOP, CENTER, BOTTOM } y;
    };

    enum class Layout : uint8_t { LEFT, CENTER, RIGHT };

    using Chars = jl::rarray<const uint32_t>;
    using Colors_RGBA = jl::rarray<const uint32_t>;

    [[nodiscard]] static Text* new_() { return nullptr; }

    void
    fill(math::v2u32 occupied_, size_t vertices_offset,
         size_t indices_offset, uint16_t indice_count)
    {
        occupied = occupied_;
        vertexesOffset = vertices_offset;
        indexesOffset = indices_offset;
        indexCount = indice_count;
        frame_index = vk::Frame(-1);
    }

    void destroy_mark(vk::Frame frame_index) {
        this->frame_index = frame_index;
    }

    [[nodiscard]] math::v2f
    get_pos(math::v2u32 screen_half_extent) const
    {
        math::v2f offset = pos.offset;
        switch (pos.text_offset_mode.x) {
        using P = Text::OffsetMode::X;
        case P::LEFT:   offset.x += occupied.x / 2;
            break;
        case P::RIGHT:  offset.x -= occupied.x / 2;
            break;
        case P::CENTER:
            break;
        }
        switch (pos.text_offset_mode.y) {
        using P = Text::OffsetMode::Y;
        case P::TOP:
            break;
        case P::BOTTOM: offset.y -= occupied.y;
            break;
        case P::CENTER: offset.y -= occupied.y / 2;
            break;
        }

        switch (pos.screen_offset_mode.x) {
        using P = Text::OffsetMode::X;
        case P::LEFT:   offset.x -= screen_half_extent.x;
            break;
        case P::RIGHT:  offset.x += screen_half_extent.x;
            break;
        case P::CENTER:
            break;
        }
        switch (pos.screen_offset_mode.y) {
        using P = Text::OffsetMode::Y;
        case P::TOP:    offset.y -= screen_half_extent.y;
            break;
        case P::BOTTOM: offset.y += screen_half_extent.y;
            break;
        case P::CENTER:
            break;
        }
        return offset;
    }

    struct Position {
        math::v2f offset;
        OffsetMode text_offset_mode;
        OffsetMode screen_offset_mode;
    };

    math::v2u32 occupied;
    Position pos;
    uint16_t size;

    vk::DeviceBufferPart buffer;

    size_t vertexesOffset;
    size_t indexesOffset;
    uint32_t indexCount;
    vk::Frame frame_index;

    GlyphManager *p_parent;

    uint16_t unique_glyph_count;
    uint32_t unique_glyph_ids[];
};
}
File src/graphics/resources/texture.h deleted (index 11b3715..0000000)
#pragma once

#include "../draw_stages/descriptors.h"
#include "../draw_stages/gpu_image.h"
#include "state.h"

#include <jrf/image.h>

namespace jen {
struct GpuTexture {
    void init(const jrf::Image *p_src, bool destroy_source) {
        state = ResourceState::LOADING;
        is_source_destroy_allowed = destroy_source;
        source = *p_src;
        mip_levels = mip_level(p_src->extent.width, p_src->extent.height);
    }
    void destroy_source_if_allowed() {
        if (is_source_destroy_allowed) {
            source.destroy();
            is_source_destroy_allowed = false;
        }
    }
    static uint32_t mip_level(uint32_t width, uint32_t height) {
        return uint32_t(std::floor(std::log2(jl::max(width, height)))) + 1;
    }
    vk::GpuImage<vk::GpuImageMode::VIEW> gpu_im;
    vk::Descriptors::Textures::Set descriptor;
    ResourceState state;
    uint32_t mip_levels;
    bool is_source_destroy_allowed;
    jrf::Image source;
};
}
File src/graphics/settings.h deleted (index 6b101af..0000000)
//
// Created by damir on 6/16/18.
//
#pragma once

extern "C" {
#include <vulkan/vulkan.h>
}
#include <jlib/array.h>
#include <vkw/pipeline.h>
#include <math/vector.h>
#include "../instance/window.h"

namespace jen {
struct GraphicsSettings
{
    struct DebugOverlay {
        bool is_enabled;
        bool is_visible;
        Key::Board toggle_key;
        const char *font_path;
    };

    enum class Shading : uint32_t {
        DEFAULT,
        NO_LIGHTING,
        DEBUG_TEXTURE_COORDINATES,
        DEBUG_CLUSTERS_DEPTH,
        DEBUG_CLUSTERS_NUM_LIGHTS,
        COUNT
    };

    enum class Filter : uint32_t { _1, _16, _25, _32, _64, _100, _128 };
    struct Shadow {
        float bias = 0.05f;
        Filter pcss_search = Filter::_16;
        Filter pcf = Filter::_32;
        uint32_t extent = 512;
    };

    enum class DrawMode : uint8_t {
        DEFAULT, WIREFRAME, POINTS
    };

    DrawMode draw_mode;
    vkw::CullMode cull_mode;
    Shading shading = Shading::DEFAULT;
    Shadow shadows;
    vkw::Samples multisampling;
    bool is_vSync_enabled;
    bool wait_for_gpu_frame_draw;
    bool wait_for_monitor;
    bool is_debug_normals_visible;
    bool is_debug_depth_cube_visible;
    DebugOverlay debug_overlay;

    [[nodiscard]] bool operator ==(const GraphicsSettings& settings) const {
        return memcmp(this, &settings, sizeof(*this)) == 0;
    }
    [[nodiscard]] bool operator !=(const GraphicsSettings& settings) const {
        return not operator==(settings);
    }
};
}
File src/instance/controls.h deleted (index 765ec67..0000000)
1 #pragma once
2
3 #define GLFW_INCLUDE_VULKAN
4 #include <GLFW/glfw3.h>
5
6 namespace Key
7 {
8 enum State : uint8_t
9 {
10 OFF = GLFW_RELEASE,
11 ON = GLFW_PRESS
12 };
13 enum Board : uint16_t
14 {
15 kSPACE = GLFW_KEY_SPACE,
16 kMINUS = GLFW_KEY_MINUS,
17
18 k0 = GLFW_KEY_0,
19 k1 = GLFW_KEY_1,
20 k2 = GLFW_KEY_2,
21 k3 = GLFW_KEY_3,
22 k4 = GLFW_KEY_4,
23 k5 = GLFW_KEY_5,
24 k6 = GLFW_KEY_6,
25 k7 = GLFW_KEY_7,
26 k8 = GLFW_KEY_8,
27 k9 = GLFW_KEY_9,
28
29 kUp = GLFW_KEY_UP,
30 kDown = GLFW_KEY_DOWN,
31 kLeft = GLFW_KEY_LEFT,
32 kRight = GLFW_KEY_RIGHT,
33
34 kEQUAL = GLFW_KEY_EQUAL,
35
36 A = GLFW_KEY_A,
37 B = GLFW_KEY_B,
38 C = GLFW_KEY_C,
39 D = GLFW_KEY_D,
40 E = GLFW_KEY_E,
41 F = GLFW_KEY_F,
42 G = GLFW_KEY_G,
43 H = GLFW_KEY_H,
44 I = GLFW_KEY_I,
45 J = GLFW_KEY_J,
46 K = GLFW_KEY_K,
47 L = GLFW_KEY_L,
48 M = GLFW_KEY_M,
49 N = GLFW_KEY_N,
50 O = GLFW_KEY_O,
51 P = GLFW_KEY_P,
52 Q = GLFW_KEY_Q,
53 R = GLFW_KEY_R,
54 S = GLFW_KEY_S,
55 T = GLFW_KEY_T,
56 U = GLFW_KEY_U,
57 V = GLFW_KEY_V,
58 W = GLFW_KEY_W,
59 X = GLFW_KEY_X,
60 Y = GLFW_KEY_Y,
61 Z = GLFW_KEY_Z,
62
63 kESCAPE = GLFW_KEY_ESCAPE,
64
65 kBACKSPACE = GLFW_KEY_BACKSPACE,
66
67 kPAUSE = GLFW_KEY_PAUSE,
68
69 f1 = GLFW_KEY_F1,
70 f2 = GLFW_KEY_F2,
71 f3 = GLFW_KEY_F3,
72 f4 = GLFW_KEY_F4,
73 f5 = GLFW_KEY_F5,
74 f6 = GLFW_KEY_F6,
75 f7 = GLFW_KEY_F7,
76 f8 = GLFW_KEY_F8,
77 f9 = GLFW_KEY_F9,
78 f10 = GLFW_KEY_F10,
79 f11 = GLFW_KEY_F11,
80 f12 = GLFW_KEY_F12,
81
82 kCONTROL_L = GLFW_KEY_LEFT_CONTROL,
83 kCONTROL_R = GLFW_KEY_RIGHT_CONTROL
84 };
85
86 enum Mouse : uint8_t
87 {
88 m_1 = GLFW_MOUSE_BUTTON_1,
89 m_L = GLFW_MOUSE_BUTTON_LEFT,
90 m_R = GLFW_MOUSE_BUTTON_RIGHT,
91 m_M = GLFW_MOUSE_BUTTON_MIDDLE
92 };
93 };
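The deleted `Key` namespace above wraps GLFW macros into typed enums so calling code never touches the raw constants. A GLFW-free sketch of that wrapping pattern, with the GLFW values hard-coded as illustrative literals (GLFW assigns printable keys their ASCII codes, e.g. `GLFW_KEY_A == 65`, and puts function keys above the ASCII range):

```cpp
#include <cstdint>

namespace Key {
    // GLFW_RELEASE == 0, GLFW_PRESS == 1
    enum State : uint8_t { OFF = 0, ON = 1 };
    enum Board : uint16_t {
        kSPACE  = 32,   // GLFW_KEY_SPACE — printable keys use ASCII codes
        A       = 65,   // GLFW_KEY_A
        kESCAPE = 256   // GLFW_KEY_ESCAPE — non-printable keys start here
    };
}

// Stand-in for Window::state(), which forwards to glfwGetKey and
// casts the int result back into the typed enum.
inline Key::State fake_state(int glfw_result) {
    return Key::State(glfw_result);
}
```

The cast in `state()` is sound because the enum values are defined to be exactly the GLFW constants, so no translation table is needed.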
File src/instance/instance.cpp changed (mode: 100644) (index 7801c43..8569a29)
1 1 #include "instance.h" #include "instance.h"
2 #include <jen/configuration.h>
2 3
3 4 [[nodiscard]] bool [[nodiscard]] bool
4 5 getRequiredInstanceExtensions(jl::rarray<const char*> *p_exs) { getRequiredInstanceExtensions(jl::rarray<const char*> *p_exs) {
 
... ... getRequiredInstanceExtensions(jl::rarray<const char*> *p_exs) {
19 20 } }
20 21
21 22 [[nodiscard]] jen::Result jen::Instance:: [[nodiscard]] jen::Result jen::Instance::
22 init(ModulesMask modules_mask,
23 const ThreadPoolSettings &tps,
24 const WindowSettings &ws) {
23 init(ModulesMask modules_mask, const Settings &settings)
24 {
25 25 jassert_release(modules_mask, "modules_mask cannot be 0"); jassert_release(modules_mask, "modules_mask cannot be 0");
26 26 Result res; Result res;
27 27 this->modules_mask = modules_mask; this->modules_mask = modules_mask;
28 28 if (modules_mask & ModulesFlag::GRAPHICS) { if (modules_mask & ModulesFlag::GRAPHICS) {
29 29 if (not window.init_glfw()) if (not window.init_glfw())
30 30 return vkw::ERROR_WINDOW_INITIALIZATION; return vkw::ERROR_WINDOW_INITIALIZATION;
31 if (not window.init({1200,700}, ws.p_title_str, false))
31 if (not window.init({1200,700}, settings.window.p_title_str, false))
32 32 return vkw::ERROR_WINDOW_INITIALIZATION; return vkw::ERROR_WINDOW_INITIALIZATION;
33 33
34 34 if (not getRequiredInstanceExtensions(&extensions)) { if (not getRequiredInstanceExtensions(&extensions)) {
 
... ... init(ModulesMask modules_mask,
48 48 else else
49 49 jassert_soft(false, "validation layers not supported\n"); jassert_soft(false, "validation layers not supported\n");
50 50 #endif #endif
51 res = instance.init(layers, {extensions.begin(),extensions.count()});
51 vkw::ProductInfo app; {
52 app.p_name_str = settings.application.p_name_str;
53 auto &app_v = settings.application.version;
54 app.version = vkw::Version(app_v.major, app_v.minor, app_v.patch);
55 }
56 vkw::ProductInfo engine; {
57 engine.p_name_str = JEN_NAME;
58 engine.version = vkw::Version{
59 JEN_VERSION_MAJOR, JEN_VERSION_MINOR, JEN_VERSION_PATCH
60 };
61 }
62
63 res = instance.init(layers, {extensions.begin(),extensions.count()},
64 vkw::VULKAN_API_1_1, app, engine);
52 65 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
53 66 goto CANCEL_EXTENSION; goto CANCEL_EXTENSION;
54 67 if (window.p_window != nullptr) { if (window.p_window != nullptr) {
 
... ... init(ModulesMask modules_mask,
61 74 #ifdef JEN_VLK_VALIDATION #ifdef JEN_VLK_VALIDATION
62 75 (void)callback.init(instance); (void)callback.init(instance);
63 76 #endif #endif
64
65 if (not thread_pool
66 .run(tps.threads_count == 0 ? jth::cpu_number() : tps.threads_count,
67 tps.queues_count)) {
68 res = VK_ERROR_OUT_OF_HOST_MEMORY;
69 goto CANCEL_CALLBACK;
77 {
78 auto &tps = settings.thread_pool;
79 if (not thread_pool
80 .run(tps.threads_count == 0 ? jth::cpu_number() : tps.threads_count,
81 tps.queues_count)) {
82 res = VK_ERROR_OUT_OF_HOST_MEMORY;
83 goto CANCEL_CALLBACK;
84 }
70 85 } }
71 86 return res; return res;
72 87 CANCEL_CALLBACK: CANCEL_CALLBACK:
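`Instance::init` above treats `threads_count == 0` as "one worker per CPU" by substituting `jth::cpu_number()`. The same fallback expressed with the standard library (a sketch; `jth::cpu_number` is JEN's own helper, `std::thread::hardware_concurrency` is its stdlib counterpart):

```cpp
#include <cstdint>
#include <thread>

// 0 means "auto": fall back to the hardware thread count, guarding
// against hardware_concurrency() returning 0 on platforms where the
// count cannot be determined.
inline uint32_t resolve_thread_count(uint32_t requested) {
    if (requested != 0)
        return requested;
    uint32_t hw = std::thread::hardware_concurrency();
    return hw != 0 ? hw : 1;
}
```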
File src/instance/instance.h changed (mode: 100644) (index 6264785..1408958)
1 1 #pragma once #pragma once
2 2
3 3 #include <vkw/instance.h> #include <vkw/instance.h>
4 #include <jen/result.h>
4 5 #include "debug.h" #include "debug.h"
5 #include "window.h"
6 #include <jen/window.h>
6 7 #include "jlib/thread_pool.h" #include "jlib/thread_pool.h"
8 #include <jen/settings.h>
7 9
8 10 namespace jen namespace jen
9 11 { {
10 struct ThreadPoolSettings {
11 struct Indices {
12 uint32_t drawFrame;
13 };
14
15 uint32_t queues_count;
16 uint32_t threads_count;
17 Indices queue_indices;
18 };
19 struct WindowSettings {
20 const char *p_title_str;
21 };
22
23 namespace ModulesFlag { enum T : uint32_t {
24 COMPUTE = 1,
25 GRAPHICS = 2
26 }; }
27 using ModulesMask = uint32_t;
28
29 using Result = vkw::Result;
30
31 12 struct Instance struct Instance
32 13 { {
33 14 [[nodiscard]] Result [[nodiscard]] Result
34 init(ModulesMask, const ThreadPoolSettings&, const WindowSettings&);
15 init(ModulesMask, const Settings&);
35 16
36 17 void destroy(); void destroy();
37 18
File src/instance/window.h deleted (index 157efbc..0000000)
1 #pragma once
2
3 #include "controls.h"
4 #include <math/vector.h>
5 #include <vkw/surface.h>
6
7
8 struct Window
9 {
10 enum class CursorMode {
11 NORMAL = GLFW_CURSOR_NORMAL,
12 HIDDEN = GLFW_CURSOR_HIDDEN,
13 DISABLED = GLFW_CURSOR_DISABLED
14 };
15 struct InputMode {
16 CursorMode cursor;
17 };
18
19 using Cursor = math::v2d;
20 using Extent = math::vec2<int>;
21 constexpr static const Extent ExtentAny = { GLFW_DONT_CARE, GLFW_DONT_CARE };
22
23 [[nodiscard]] static bool init_glfw()
24 {
25 glfwSetErrorCallback([](int, const char* error) {
26 fprintf(stderr, "GLFW_ERROR: %s\n", error);
27 });
28 return glfwInit();
29 }
30
31 [[nodiscard]] bool init(Extent extent_, const char* title, bool is_visible_)
32 {
33 is_visible = is_visible_;
34 extent = extent_;
35
36 p_monitor = glfwGetPrimaryMonitor();
37 p_video_mode = glfwGetVideoMode(p_monitor);
38
39 glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
40 glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE);
41 glfwWindowHint(GLFW_VISIBLE, is_visible);
42 p_window = glfwCreateWindow(extent.x, extent.y, title, {}, {});
43
44
45 glfwSetWindowUserPointer(p_window, this);
46 glfwSetWindowSizeCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
47 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
48 p->extent.x = w;
49 p->extent.y = h;
50 });
51 glfwSetFramebufferSizeCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
52 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
53 p->framebuffer_extent.x = w;
54 p->framebuffer_extent.y = h;
55 });
56 glfwSetWindowPosCallback(p_window, [](GLFWwindow* p_w, int w, int h) {
57 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
58 p->position.x = w;
59 p->position.y = h;
60 });
61 glfwSetWindowIconifyCallback(p_window, [](GLFWwindow* p_w, int iconified) {
62 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
63 p->is_iconified = iconified;
64 });
65 glfwSetWindowFocusCallback(p_window, [](GLFWwindow* p_w, int focus) {
66 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
67 p->is_focused = focus;
68 });
69 glfwSetWindowRefreshCallback(p_window, [](GLFWwindow *p_w) {
70 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
71 p->is_damaged = true;
72 });
73 glfwSetCursorPosCallback(p_window, [](GLFWwindow *p_w, double x, double y) {
74 Window *p = reinterpret_cast<Window*>(glfwGetWindowUserPointer(p_w));
75 p->cursor = {x, y};
76 });
77
78 glfwGetCursorPos(p_window, &cursor.x, &cursor.y);
79 glfwGetWindowSize(p_window, &extent.x, &extent.y);
80 glfwGetFramebufferSize(p_window,
81 &framebuffer_extent.x, &framebuffer_extent.y);
82 glfwGetWindowPos(p_window, &position.x, &position.y);
83
84 #if GLFW_VERSION_MINOR > 2
85 if (glfwRawMouseMotionSupported())
86 glfwSetInputMode(p_window, GLFW_RAW_MOUSE_MOTION, GLFW_TRUE);
87 #endif
88
89 input_mode.cursor = get_cursor_mode();
90 return p_window != nullptr;
91 }
92
93 void set_visibility(bool is_visible_) {
94 is_visible = is_visible_;
95 if (is_visible)
96 glfwShowWindow(p_window);
97 else
98 glfwHideWindow(p_window);
99 }
100
101 void set_extent_limits(Extent min = ExtentAny, Extent max = ExtentAny) const {
102 glfwSetWindowSizeLimits(p_window, min.x, min.y, max.x, max.y);
103 }
104
105 [[nodiscard]] vkw::Result
106 create_surface(vkw::Instance ins, vkw::Surface *p_dst) const {
107 return glfwCreateWindowSurface(ins, p_window, nullptr, &p_dst->p_vk);
108 }
109
110 void destroy() {
111 glfwDestroyWindow(p_window);
112 }
113
114 [[nodiscard]] Key::State state(Key::Board key) const {
115 return Key::State(glfwGetKey(p_window, key));
116 }
117 [[nodiscard]] Key::State state(Key::Mouse key) const {
118 return Key::State(glfwGetMouseButton(p_window, key));
119 }
120 [[nodiscard]] bool is_on(Key::Board key) const {
121 return state(key) == Key::State::ON;
122 }
123
124 [[nodiscard]] bool is_on(Key::Mouse key) const {
125 return state(key) == Key::State::ON;
126 }
127
128 [[nodiscard]] bool is_off(Key::Board key) const {
129 return state(key) == Key::State::OFF;
130 }
131
132 [[nodiscard]] bool is_off(Key::Mouse key) const {
133 return state(key) == Key::State::OFF;
134 }
135
136 static void poll() {
137 glfwPollEvents();
138 }
139
140 static void wait() {
141 glfwWaitEvents();
142 }
143
144 [[nodiscard]] bool is_window_close_fired() const {
145 return glfwWindowShouldClose(p_window) == GLFW_TRUE;
146 }
147
148 void toggle_fullscreen() {
149 if (is_fullscreen)
150 set_windowed();
151 else
152 set_fullscreen();
153 }
154
155 void set_fullscreen() {
156 if (not is_fullscreen) {
157 old_window_data.extent = extent;
158 old_window_data.framebuffer_extent = framebuffer_extent;
159 old_window_data.position = position;
160 glfwSetWindowMonitor(p_window, p_monitor, 0, 0,
161 p_video_mode->width, p_video_mode->height,
162 GLFW_DONT_CARE);
163 is_fullscreen = true;
164 }
165 }
166
167 void set_windowed() {
168 if (is_fullscreen) {
169 glfwSetWindowMonitor(p_window, nullptr, old_window_data.position.x,
170 old_window_data.position.y,
171 old_window_data.extent.x, old_window_data.extent.y,
172 GLFW_DONT_CARE);
173 is_fullscreen = false;
174 }
175 }
176
177 [[nodiscard]] int refresh_rate() {
178 auto monitor = glfwGetPrimaryMonitor();
179 const GLFWvidmode* mode = glfwGetVideoMode(monitor);
180 return mode->refreshRate;
181 }
182
183 void window_close_fire() {
184 glfwSetWindowShouldClose(p_window, GLFW_TRUE);
185 }
186
187 void set_cursor_mode(CursorMode mode) {
188 input_mode.cursor = mode;
189 glfwSetInputMode(p_window, GLFW_CURSOR, int(input_mode.cursor));
190 }
191
192 [[nodiscard]] CursorMode get_cursor_mode() const {
193 return CursorMode(glfwGetInputMode(p_window, GLFW_CURSOR));
194 }
195
196 operator GLFWwindow* () { return p_window; }
197
198 const GLFWvidmode *p_video_mode;
199 GLFWmonitor *p_monitor;
200 GLFWwindow *p_window;
201
202 struct OldData {
203 Extent extent;
204 Extent framebuffer_extent;
205 Extent position;
206 };
207
208 InputMode input_mode;
209 Extent extent;
210 Extent framebuffer_extent;
211 Extent position;
212 OldData old_window_data;
213
214
215 bool is_iconified = false;
216 bool is_visible = true;
217 bool is_focused = false;
218 bool is_fullscreen = false;
219 bool is_damaged = false;
220
221 uint8_t ___reserved[7];
222 Cursor cursor;
223 };
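The deleted `window.h` routes every C callback back into the owning `Window` object through `glfwSetWindowUserPointer`: each callback is a captureless lambda that recovers `this` from the user pointer. That is the standard trampoline for C APIs that only accept plain function pointers. A GLFW-free sketch of the pattern (`FakeApi` is a stand-in for the GLFW callback registry):

```cpp
// A C-style API: one function pointer plus an opaque user pointer.
using ResizeFn = void (*)(void *user, int w, int h);

struct FakeApi {
    void    *user      = nullptr;
    ResizeFn on_resize = nullptr;
    void fire_resize(int w, int h) { if (on_resize) on_resize(user, w, h); }
};

struct Window {
    int width = 0, height = 0;

    void attach(FakeApi *api) {
        api->user = this;  // like glfwSetWindowUserPointer(p_window, this)
        api->on_resize = [](void *user, int w, int h) {
            // trampoline: recover the object, then update its state,
            // mirroring the glfwSetWindowSizeCallback lambda above
            auto *self = static_cast<Window *>(user);
            self->width  = w;
            self->height = h;
        };
    }
};
```

Captureless lambdas convert implicitly to function pointers, which is what makes this pattern work with C callback signatures.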
File src/resource_manager/resource_manager.cpp renamed from src/graphics/jrl.cpp (similarity 74%) (mode: 100644) (index b9a1b92..68bb892)
1 #include <jen/jrl.h>
1 #include "resource_manager.h"
2 2
3 using RM = jen::ResourceManager;
3 namespace jen::detail
4 {
5 void destroy(RM_Image *p, ModuleGraphics mg) {
6 mg.destroy(p->p_data, true);
7 }
8 void destroy(RM_Vertices *p, ModuleGraphics mg) {
9 mg.destroy(p->p_data, true);
10 }
11 void destroy(RM_Indices *p, ModuleGraphics mg) {
12 mg.destroy(p->p_data, true);
13 }
14
15 template<typename T>
16 void destroy(ResRef<T> *p, ModuleGraphics mg) {
17 if (p->is_data)
18 destroy(&p->u.res, mg);
19 else
20 p->u.path.destroy();
21 }
22
23 void destroy(RM_Mesh *p, ModuleGraphics mg) {
24 destroy(&p->ver, mg);
25 destroy(&p->ind, mg);
26 }
27
28 void destroy(RM_Model *p, ModuleGraphics mg) {
29 destroy(&p->mesh, mg);
30 destroy(&p->image, mg);
31 }
32
33 void destroy(RM_Scene *p, ModuleGraphics = {}) {
34 p->destroy();
35 }
4 36
5 void RM::init(ModuleGraphics *p_mg) {
6 p_moduleGraphics = p_mg;
37 template<jrf::ResourceType RT>
38 void ResHandle<RT>::destroy(ModuleGraphics mg) {
39 detail::destroy(&res, mg);
40 path.destroy();
41 }
42 }
43
44 using RMD = jen::ResourceManagerData;
45 using RMM = jen::ModuleResourceManager;
46
47 void RMD::init(ModuleGraphics mg) {
48 moduleGraphics = mg;
7 49 images = {}; images = {};
8 50 vertices = {}; vertices = {};
9 51 indices = {}; indices = {};
 
... ... void RM::init(ModuleGraphics *p_mg) {
11 53 models = {}; models = {};
12 54 scenes = {}; scenes = {};
13 55 } }
14 void RM::destroy() {
15 images .destroy(&decltype(images) ::item::destroy, p_moduleGraphics);
16 vertices.destroy(&decltype(vertices)::item::destroy, p_moduleGraphics);
17 indices .destroy(&decltype(indices) ::item::destroy, p_moduleGraphics);
18 meshes .destroy(&decltype(meshes) ::item::destroy, p_moduleGraphics);
19 models .destroy(&decltype(models) ::item::destroy, p_moduleGraphics);
20 scenes .destroy(&decltype(scenes) ::item::destroy, nullptr);
56 void RMD::destroy() {
57 images .destroy(&decltype(images) ::item::destroy, moduleGraphics);
58 vertices.destroy(&decltype(vertices)::item::destroy, moduleGraphics);
59 indices .destroy(&decltype(indices) ::item::destroy, moduleGraphics);
60 meshes .destroy(&decltype(meshes) ::item::destroy, moduleGraphics);
61 models .destroy(&decltype(models) ::item::destroy, moduleGraphics);
62 scenes .destroy(&decltype(scenes) ::item::destroy, moduleGraphics);
21 63 } }
22 64
23 65 ///*****************************RENDER***************************************/// ///*****************************RENDER***************************************///
 
... ... print_indices(uint8_t* p_ind, uint64_t count, jrf::IndexFormat format)
57 99 #endif #endif
58 100
59 101 [[nodiscard]] vkw::Result [[nodiscard]] vkw::Result
60 create_render(jen::ModuleGraphics *p_mg,
61 jrf::Vertices *p_jrf, jen::VertexData *p)
102 create_render(jen::ModuleGraphics mg, jrf::Vertices *p_jrf, jen::VertexData *p)
62 103 { {
63 using namespace jen::vk;
64 104 using JAT = jrf::Vertices::AttributeType; using JAT = jrf::Vertices::AttributeType;
65 105 p->offsets = {}; p->offsets = {};
66 106
 
... ... create_render(jen::ModuleGraphics *p_mg,
96 136 jen::WriteData wd; jen::WriteData wd;
97 137 wd.size = p_jrf->data_size; wd.size = p_jrf->data_size;
98 138 wd.p = p_jrf->p_data; wd.p = p_jrf->p_data;
99 auto res = p_mg->create(wd, &p->p_data, true);
139 auto res = mg.create(wd, &p->p_data, true);
100 140 if (res == VK_SUCCESS) if (res == VK_SUCCESS)
101 141 *p_jrf = {}; *p_jrf = {};
102 142 return res; return res;
103 143 } }
104 144
105 145 [[nodiscard]] vkw::Result [[nodiscard]] vkw::Result
106 create_render(jen::ModuleGraphics *p_mg, jrf::Indices *p_jrf, jen::IndexData *p)
146 create_render(jen::ModuleGraphics mg, jrf::Indices *p_jrf, jen::IndexData *p)
107 147 { {
108 148 p->offset = 0; p->offset = 0;
109 149 p->type = p_jrf->format == jrf::IndexFormat::U16 p->type = p_jrf->format == jrf::IndexFormat::U16
110 ? vkw::IndexType::U16
111 : vkw::IndexType::U32;
150 ? jen::IndexType::U16 : jen::IndexType::U32;
112 151 #ifdef JEN_DBG_JRFMESH #ifdef JEN_DBG_JRFMESH
113 152 printf("---- %lu indices ----\n", p_jrf->size / p_jrf->format); printf("---- %lu indices ----\n", p_jrf->size / p_jrf->format);
114 153 if (p_jrf->size < 200) if (p_jrf->size < 200)
 
... ... create_render(jen::ModuleGraphics *p_mg, jrf::Indices *p_jrf, jen::IndexData *p)
118 157 p->count = uint32_t(p_jrf->size / p_jrf->format); p->count = uint32_t(p_jrf->size / p_jrf->format);
119 158 wd.size = p_jrf->size; wd.size = p_jrf->size;
120 159 wd.p = p_jrf->p_data; wd.p = p_jrf->p_data;
121 auto res = p_mg->create(wd, &p->p_data, true);
160 auto res = mg.create(wd, &p->p_data, true);
122 161 if (res == VK_SUCCESS) if (res == VK_SUCCESS)
123 162 *p_jrf = {}; *p_jrf = {};
124 163 return res; return res;
125 164 } }
126 165
127 166 [[nodiscard]] vkw::Result [[nodiscard]] vkw::Result
128 create_render(jen::ModuleGraphics *p_mg, jrf::Image *p_jrf, jen::TextureData *p)
167 create_render(jen::ModuleGraphics mg, jrf::Image *p_jrf, jen::TextureData *p)
129 168 { {
130 169 p->layer_index = 0; p->layer_index = 0;
131 auto res = p_mg->create(p_jrf, &p->p_data, true);
170 auto res = mg.create(p_jrf, &p->p_data, true);
132 171 if (res == VK_SUCCESS) if (res == VK_SUCCESS)
133 172 *p_jrf = {}; *p_jrf = {};
134 173 return res; return res;
 
... ... create_render(jen::ModuleGraphics *p_mg, jrf::Image *p_jrf, jen::TextureData *p)
137 176 ///*****************************RESOURCES************************************/// ///*****************************RESOURCES************************************///
138 177
139 178 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
140 [[nodiscard]] bool RM::
179 [[nodiscard]] bool RMD::
141 180 find_and_apply_ref_count(const jl::string_ro &path, detail::ResHandle<RT>*p_dst) find_and_apply_ref_count(const jl::string_ro &path, detail::ResHandle<RT>*p_dst)
142 181 { {
143 182 Storage<RT> *p_storage; Storage<RT> *p_storage;
 
... ... find_and_apply_ref_count(const jl::string_ro &path, detail::ResHandle<RT>*p_dst)
152 191
153 192
154 193 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
155 [[nodiscard]] vkw::Result
156 RM::insert(
194 [[nodiscard]] jen::Result RMD::
195 insert(
157 196 jl::string *p_path, jl::string *p_path,
158 197 typename jrf::Resource<RT>::T *p_jrf_resource, typename jrf::Resource<RT>::T *p_jrf_resource,
159 198 detail::ResHandle<RT> *p_dst) detail::ResHandle<RT> *p_dst)
 
... ... typename jrf::Resource<RT>::T *p_jrf_resource,
163 202 rh.use_count = 1; rh.use_count = 1;
164 203
165 204 vkw::Result res; vkw::Result res;
166 res = create_render(p_moduleGraphics, p_jrf_resource, &rh.res);
205 res = create_render(moduleGraphics, p_jrf_resource, &rh.res);
167 206 if (res != VK_SUCCESS) if (res != VK_SUCCESS)
168 207 return res; return res;
169 208
 
... ... typename jrf::Resource<RT>::T *p_jrf_resource,
175 214 } }
176 215 else else
177 216 { {
178 p_moduleGraphics->destroy(rh.res.p_data, true);
217 moduleGraphics.destroy(rh.res.p_data, true);
179 218 return VK_ERROR_OUT_OF_HOST_MEMORY; return VK_ERROR_OUT_OF_HOST_MEMORY;
180 219 } }
181 220 } }
182 221
183 222 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
184 RM::Result RM::
223 RMD::Result RMD::
185 224 create_res(jl::string *p_path, detail::ResHandle<RT> *p_dst) create_res(jl::string *p_path, detail::ResHandle<RT> *p_dst)
186 225 { {
187 226 jrf::Result jrf_result; jrf::Result jrf_result;
 
... ... create_res(jl::string *p_path, detail::ResHandle<RT> *p_dst)
191 230 if (jrf_result != jrf::Result::SUCCESS) if (jrf_result != jrf::Result::SUCCESS)
192 231 return { vkw::ERROR_JRF, jrf_result }; return { vkw::ERROR_JRF, jrf_result };
193 232
194 vkw::Result vk_result;
195
196 vk_result = insert(p_path, &jrf_resource, p_dst);
197 if (vk_result != VK_SUCCESS)
233 jen::Result result = insert(p_path, &jrf_resource, p_dst);
234 if (result != VK_SUCCESS)
198 235 jrf_resource.destroy(); jrf_resource.destroy();
199 236
200 return { vk_result, jrf_result };
237 return { result, jrf_result };
201 238 } }
202 239
203 240 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
204 RM::Result RM::
241 RMD::Result RMD::
205 242 create(jl::string *p_path_move, detail::ResHandle<RT> **pp_dst) { create(jl::string *p_path_move, detail::ResHandle<RT> **pp_dst) {
206 243 if (find_and_apply_ref_count(*p_path_move, pp_dst)) { if (find_and_apply_ref_count(*p_path_move, pp_dst)) {
207 244 p_path_move->destroy(); p_path_move->destroy();
 
... ... create(jl::string *p_path_move, detail::ResHandle<RT> **pp_dst) {
212 249 } }
213 250
214 251 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
215 [[nodiscard]] RM::Result RM::
252 [[nodiscard]] RMD::Result RMD::
216 253 create(const jl::string_ro &path, detail::ResHandle<RT> *p_dst) { create(const jl::string_ro &path, detail::ResHandle<RT> *p_dst) {
217 254 if (find_and_apply_ref_count(path, p_dst)) if (find_and_apply_ref_count(path, p_dst))
218 255 return {}; return {};
 
... ... create(const jl::string_ro &path, detail::ResHandle<RT> *p_dst) {
226 263 } }
227 264
228 265 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
229 [[nodiscard]] RM::Result RM::
266 [[nodiscard]] RMD::Result RMD::
230 267 create(const jl::string_ro &path, Resource<RT> *p_dst) { create(const jl::string_ro &path, Resource<RT> *p_dst) {
231 268 detail::ResHandle<RT> rh; detail::ResHandle<RT> rh;
232 269 Result res = create(path, &rh); Result res = create(path, &rh);
 
... ... create(const jl::string_ro &path, Resource<RT> *p_dst) {
236 273 return {}; return {};
237 274 } }
238 275
239 [[nodiscard]] RM::Result RM::
276 [[nodiscard]] RMM::Result RMM::
240 277 create(const jl::string_ro &path, Resource<jrf::IMAGE> *p_dst) { create(const jl::string_ro &path, Resource<jrf::IMAGE> *p_dst) {
241 return create<>(path, p_dst);
278 return p->create<>(path, p_dst);
242 279 } }
243 [[nodiscard]] RM::Result RM::
280 [[nodiscard]] RMM::Result RMM::
244 281 create(const jl::string_ro &path, Resource<jrf::VERTICES> *p_dst) { create(const jl::string_ro &path, Resource<jrf::VERTICES> *p_dst) {
245 return create<>(path, p_dst);
282 return p->create<>(path, p_dst);
246 283 } }
247 [[nodiscard]] RM::Result RM::
284 [[nodiscard]] RMM::Result RMM::
248 285 create(const jl::string_ro &path, Resource<jrf::INDICES> *p_dst) { create(const jl::string_ro &path, Resource<jrf::INDICES> *p_dst) {
249 return create<>(path, p_dst);
286 return p->create<>(path, p_dst);
250 287 } }
251 288
252 289
253 290 template<jrf::ResourceType RT> template<jrf::ResourceType RT>
254 void RM::destroy(const jl::string_ro &path) {
291 void RMD::destroy(const jl::string_ro &path) {
255 292 if (path.begin() == nullptr) if (path.begin() == nullptr)
256 293 return; return;
257 294 Storage<RT> *p_storage; Storage<RT> *p_storage;
 
... ... void RM::destroy(const jl::string_ro &path) {
260 297 detail::ResHandle<RT> *p_handle; detail::ResHandle<RT> *p_handle;
261 298 if (storage.find(path, &p_handle)) { if (storage.find(path, &p_handle)) {
262 299 if (--p_handle->use_count == 0) if (--p_handle->use_count == 0)
263 storage.remove(p_handle, &detail::ResHandle<RT>::destroy,
264 p_moduleGraphics);
300 storage.remove(p_handle, &detail::ResHandle<RT>::destroy, moduleGraphics);
265 301 return; return;
266 302 } }
267 303 fprintf(stderr, "ResourceManager::destroy - " fprintf(stderr, "ResourceManager::destroy - "
 
... ... void RM::destroy(const jl::string_ro &path) {
269 305 path.begin()); path.begin());
270 306 } }
271 307
272 void RM::destroy(Resource<jrf::IMAGE> *p_res) {
273 destroy<jrf::IMAGE>(p_res->path); p_res->path = {};
308 void RMM::destroy(Resource<jrf::IMAGE> *p_res) {
309 p->destroy<jrf::IMAGE>(p_res->path); p_res->path = {};
274 310 } }
275 void RM::destroy(Resource<jrf::VERTICES> *p_res) {
276 destroy<jrf::VERTICES>(p_res->path); p_res->path = {};
311 void RMM::destroy(Resource<jrf::VERTICES> *p_res) {
312 p->destroy<jrf::VERTICES>(p_res->path); p_res->path = {};
277 313 } }
278 void RM::destroy(Resource<jrf::INDICES> *p_res) {
279 destroy<jrf::INDICES>(p_res->path); p_res->path = {};
314 void RMM::destroy(Resource<jrf::INDICES> *p_res) {
315 p->destroy<jrf::INDICES>(p_res->path); p_res->path = {};
280 316 } }
281 317
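The resource manager above deduplicates resources by path: `find_and_apply_ref_count` bumps `use_count` on a cache hit, and `destroy` removes the stored handle only when the count reaches zero. A minimal path-keyed sketch of that scheme using the standard library (JEN uses its own `jl` containers; `Handle` here is a placeholder for the GPU-side data):

```cpp
#include <string>
#include <unordered_map>

struct Handle { int use_count = 0; /* GPU data would live here */ };

struct ResourceCache {
    std::unordered_map<std::string, Handle> by_path;

    // Returns true on a cache hit; a miss is where loading would happen.
    bool acquire(const std::string &path) {
        auto it = by_path.find(path);
        if (it != by_path.end()) {
            ++it->second.use_count;
            return true;
        }
        by_path.emplace(path, Handle{1});  // load + insert on first use
        return false;
    }

    // Drops one reference; frees the entry once nobody uses it anymore.
    void release(const std::string &path) {
        auto it = by_path.find(path);
        if (it == by_path.end())
            return;  // the real destroy() prints a warning here
        if (--it->second.use_count == 0)
            by_path.erase(it);
    }
};
```

The mesh and model paths in the real code add one wrinkle on top of this: a handle may itself hold paths to sub-resources (vertices, indices, image), so `destroy_mesh`/`destroy_model` release those nested references before erasing the outer entry.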
282 318 ///*****************************MESH*****************************************/// ///*****************************MESH*****************************************///
283 319
284 void RM::
320 void RMD::
285 321 mesh_resources(const jen::detail::RM_Mesh &mesh, mesh_resources(const jen::detail::RM_Mesh &mesh,
286 322 jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind) jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind)
287 323 { {
 
... ... mesh_resources(const jen::detail::RM_Mesh &mesh,
304 340 } }
305 341 } }
306 342
307 [[nodiscard]] bool RM::
343 [[nodiscard]] bool RMD::
308 344 find_and_apply_ref_count(const jl::string_ro &mesh_path, find_and_apply_ref_count(const jl::string_ro &mesh_path,
309 345 jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind) jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind)
310 346 { {
 
... ... find_and_apply_ref_count(const jl::string_ro &mesh_path,
318 354 } }
319 355
320 356 template<jrf::ResourceType RT, typename RM_res, typename Render_Res> template<jrf::ResourceType RT, typename RM_res, typename Render_Res>
321 [[nodiscard]] RM::Result RM::
357 [[nodiscard]] RMD::Result RMD::
322 358 create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf, create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
323 359 jen::detail::ResRef<RM_res> *p_dst_ref, jen::detail::ResRef<RM_res> *p_dst_ref,
324 360 Render_Res *p_dst_rend) Render_Res *p_dst_rend)
 
... ... create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
326 362 auto &ref = *p_dst_ref; auto &ref = *p_dst_ref;
327 363 if (p_jrf->mode == jrf::ResourceMode::DATA) { if (p_jrf->mode == jrf::ResourceMode::DATA) {
328 364 ref.is_data = true; ref.is_data = true;
329 vkw::Result vres = create_render(p_moduleGraphics,
330 &p_jrf->u.data, &ref.u.res);
331 if (vres != VK_SUCCESS)
332 return {vres, {}};
365 jen::Result res = create_render(moduleGraphics, &p_jrf->u.data, &ref.u.res);
366 if (res != VK_SUCCESS)
367 return {res, {}};
333 368 *p_dst_rend = ref.u.res; *p_dst_rend = ref.u.res;
334 369 return {}; return {};
335 370 } }
 
... ... create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
338 373 ref.u.path = p_jrf->u.path; ref.u.path = p_jrf->u.path;
339 374 p_jrf->u.path = {}; p_jrf->u.path = {};
340 375 jen::Resource<RT> resource; jen::Resource<RT> resource;
341 RM::Result res = create(ref.u.path, &resource);
376 RMD::Result res = create(ref.u.path, &resource);
342 377 if (not res) if (not res)
343 378 return res; return res;
344 379 *p_dst_rend = resource.res; *p_dst_rend = resource.res;
 
... ... create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
352 387 } }
353 388 } }
354 389
355 [[nodiscard]] RM::Result RM::
390 [[nodiscard]] RMD::Result RMD::
356 391 create_mesh_res(jrf::Mesh *p_jrf_src, jen::detail::RM_Mesh *p_rm_mesh, create_mesh_res(jrf::Mesh *p_jrf_src, jen::detail::RM_Mesh *p_rm_mesh,
357 392 jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind) jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind)
358 393 { {
359 RM::Result res;
394 RMD::Result res;
360 395
361 396 // TODO create single GpuData for both ver and ind. // TODO create single GpuData for both ver and ind.
362 397 res = create_res_ref<jrf::VERTICES>(&p_jrf_src->vert, res = create_res_ref<jrf::VERTICES>(&p_jrf_src->vert,
 
... ... create_mesh_res(jrf::Mesh *p_jrf_src, jen::detail::RM_Mesh *p_rm_mesh,
367 402 &p_rm_mesh->ind, p_dst_ind); &p_rm_mesh->ind, p_dst_ind);
368 403 } }
369 404
370 [[nodiscard]] RM::Result RM::
405 [[nodiscard]] RMD::Result RMD::
371 406 create_mesh_res(jl::string *p_path, create_mesh_res(jl::string *p_path,
372 407 jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind) jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind)
373 408 { {
 
... ... create_mesh_res(jl::string *p_path,
381 416 *p_path = {}; *p_path = {};
382 417 rh.use_count = 1; rh.use_count = 1;
383 418
384 RM::Result res = create_mesh_res(&mesh, &rh.res, p_dst_ver, p_dst_ind);
419 RMD::Result res = create_mesh_res(&mesh, &rh.res, p_dst_ver, p_dst_ind);
385 420 if (not res) if (not res)
386 421 goto CANCEL; goto CANCEL;
387 422
 
... ... create_mesh_res(jl::string *p_path,
390 425 res = { VK_ERROR_OUT_OF_HOST_MEMORY, {}}; res = { VK_ERROR_OUT_OF_HOST_MEMORY, {}};
391 426 CANCEL: CANCEL:
392 427 mesh.destroy(); mesh.destroy();
393 rh.destroy(p_moduleGraphics);
428 rh.destroy(moduleGraphics);
394 429 return res; return res;
395 430 } }
396 431
397 [[nodiscard]] RM::Result RM::
432 [[nodiscard]] RMM::Result RMM::
398 433 create_mesh(const jl::string_ro &mesh_path, create_mesh(const jl::string_ro &mesh_path,
399 434 jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind) jen::VertexData *p_dst_ver, jen::IndexData *p_dst_ind)
400 435 { {
401 if (find_and_apply_ref_count(mesh_path, p_dst_ver, p_dst_ind))
436 if (p->find_and_apply_ref_count(mesh_path, p_dst_ver, p_dst_ind))
402 437 return {}; return {};
403 438 jl::string path; jl::string path;
404 439 if (not path.init(mesh_path)) if (not path.init(mesh_path))
405 440 return {VK_ERROR_OUT_OF_HOST_MEMORY, {}}; return {VK_ERROR_OUT_OF_HOST_MEMORY, {}};
406 auto res = create_mesh_res(&path, p_dst_ver, p_dst_ind);
441 auto res = p->create_mesh_res(&path, p_dst_ver, p_dst_ind);
407 442 if (not res) if (not res)
408 443 path.destroy(); path.destroy();
409 444 return res; return res;
410 445 } }
411 446
412 void RM::destroy_mesh(const jl::string_ro &mesh_path) {
447 void RMM::destroy_mesh(const jl::string_ro &mesh_path) {
413 448 if (mesh_path.begin() == nullptr) if (mesh_path.begin() == nullptr)
414 449 return; return;
415 450 detail::ResHandle<jrf::MESH> *p_handle; detail::ResHandle<jrf::MESH> *p_handle;
416 if (meshes.find(mesh_path, &p_handle)) {
451 if (p->meshes.find(mesh_path, &p_handle)) {
417 452 if (--p_handle->use_count == 0) { if (--p_handle->use_count == 0) {
418 453 auto &r = p_handle->res; auto &r = p_handle->res;
419 454 if (not r.ver.is_data) if (not r.ver.is_data)
420 destroy<jrf::VERTICES>(r.ver.u.path);
421 p_handle->destroy(p_moduleGraphics);
422 meshes.remove(p_handle);
455 p->destroy<jrf::VERTICES>(r.ver.u.path);
456 p_handle->destroy(p->moduleGraphics);
457 p->meshes.remove(p_handle);
423 458 } }
424 459 return; return;
425 460 } }
 
... ... void RM::destroy_mesh(const jl::string_ro &mesh_path) {
430 465
431 466 ///*****************************MODEL****************************************/// ///*****************************MODEL****************************************///
432 467
433 [[nodiscard]] RM::Result RM::
+[[nodiscard]] RMD::Result RMD::
 create_model_res(jl::string *p_path, jen::VertexData *p_dst_ver,
                  jen::IndexData *p_dst_ind, jen::TextureData *p_dst_img)
 {

@@ ... @@ create_model_res(jl::string *p_path, jen::VertexData *p_dst_ver,
         model.mesh.is_data = false;
         model.mesh.u.path = jrf_model.mesh.u.path;
         jrf_model.mesh.u.path = {};
-        res = create_mesh(model.mesh.u.path, p_dst_ver, p_dst_ind);
+        res = ModuleResourceManager{this}
+              .create_mesh(model.mesh.u.path, p_dst_ver, p_dst_ind);
         if (not res)
             goto CANCEL;
     }

@@ ... @@ create_model_res(jl::string *p_path, jen::VertexData *p_dst_ver,
         return {};
     res = { VK_ERROR_OUT_OF_HOST_MEMORY, {} };
 CANCEL:
-    m_handle.destroy(p_moduleGraphics);
+    m_handle.destroy(moduleGraphics);
     jrf_model.destroy();
     return res;
 }

-[[nodiscard]] bool RM::
+[[nodiscard]] bool RMD::
 find_and_apply_ref_count(const jl::string_ro &model_path,
                          jen::VertexData *p_dst_ver,
                          jen::IndexData *p_dst_ind,

@@ ... @@ find_and_apply_ref_count(const jl::string_ro &model_path,
     return false;
 }

-[[nodiscard]] RM::Result RM::
+[[nodiscard]] RMM::Result RMM::
 create_model(const jl::string_ro &model_path, jen::VertexData *p_dst_ver,
              jen::IndexData *p_dst_ind, jen::TextureData *p_dst_img)
 {
-    if (find_and_apply_ref_count(model_path, p_dst_ver, p_dst_ind, p_dst_img))
+    if (p->find_and_apply_ref_count(model_path, p_dst_ver, p_dst_ind, p_dst_img))
         return {};

     jl::string path;
     if (not path.init(model_path))
         return {VK_ERROR_OUT_OF_HOST_MEMORY, {}};

-    auto res = create_model_res(&path, p_dst_ver, p_dst_ind, p_dst_img);
+    auto res = p->create_model_res(&path, p_dst_ver, p_dst_ind, p_dst_img);
     if (not res)
         path.destroy();
     return res;
 }

-void RM::destroy_model(const jl::string_ro &model_path) {
+void RMM::destroy_model(const jl::string_ro &model_path) {
     if (model_path.begin() == nullptr)
         return;
     detail::ResHandle<jrf::MODEL> *p_handle;
-    if (models.find(model_path, &p_handle)) {
+    if (p->models.find(model_path, &p_handle)) {
         if (--p_handle->use_count == 0) {
             auto &model = p_handle->res;
             if (not model.mesh.is_data)
                 destroy_mesh(model.mesh.u.path);

             if (not model.image.is_data)
-                destroy<jrf::IMAGE>(model.image.u.path);
+                p->destroy<jrf::IMAGE>(model.image.u.path);

-            p_handle->destroy(p_moduleGraphics);
-            models.remove(p_handle);
+            p_handle->destroy(p->moduleGraphics);
+            p->models.remove(p_handle);
         }
         return;
     }

@@ ... @@ void RM::destroy_model(const jl::string_ro &model_path) {

 ///*****************************SCENE****************************************///

-[[nodiscard]] RM::Result RM::
+[[nodiscard]] RMM::Result RMM::
 create_scene(const jl::string_ro &path, jen::ShiftPO2 shift_scale,
              SceneData *p_dst)
 {

@@ ... @@ create_scene(const jl::string_ro &path, jen::ShiftPO2 shift_scale,

     if (rh.path.init(path)) {
         rh.use_count = 1;
-        if (scenes.insert(rh))
+        if (p->scenes.insert(rh))
             return {};
         rh.path = {};
     }

@@ ... @@ CANCEL:
     return res;
 }

-void RM::destroy_scene(const jl::string_ro &scene_path) {
+void RMM::destroy_scene(const jl::string_ro &scene_path) {
     if (scene_path.begin() == nullptr)
         return;
     detail::ResHandle<jrf::SCENE> *p_handle;
-    if (scenes.find(scene_path, &p_handle)) {
+    if (p->scenes.find(scene_path, &p_handle)) {
         if (--p_handle->use_count == 0) {
             auto &sc = p_handle->res;
             for (auto e : sc.entries)
-                destroy<jrf::MODEL>(e.model_path);
-            p_handle->destroy(p_moduleGraphics);
-            scenes.remove(p_handle);
+                p->destroy<jrf::MODEL>(e.model_path);
+            p_handle->destroy(p->moduleGraphics);
+            p->scenes.remove(p_handle);
         }
         return;
     }
File src/resource_manager/resource_manager.h added (mode: 100644) (index 0000000..98f3eb2)

#pragma once
#include <jen/resource_manager.h>
#include <jlib/darray_mods.h>
#include <jrf/read.h>

namespace jen::detail {
template<jrf::ResourceType RT>
struct ResHandle : Resource<RT> {
    using Resource<RT>::path;
    using Resource<RT>::res;

    void destroy(ModuleGraphics mg);

    [[nodiscard]] bool
    operator >= (const ResHandle &r) const { return path >= r.path; }
    [[nodiscard]] bool
    operator >= (const jl::string_ro &str) const { return str <= path; }
    [[nodiscard]] bool
    operator > (const jl::string_ro &str) const { return str < path; }

    uint64_t use_count;
};
}

struct jen::ResourceManagerData
{
    void init(ModuleGraphics mg);
    void destroy();

private:
    friend ModuleResourceManager;

    using Result = ModuleResourceManager::Result;

    template<jrf::ResourceType RT>
    using Storage = jl::darray_sorted<detail::ResHandle<RT>>;

    void get_storage(Storage<jrf::IMAGE>    **p_p) { *p_p = &images;  }
    void get_storage(Storage<jrf::VERTICES> **p_p) { *p_p = &vertices;}
    void get_storage(Storage<jrf::INDICES>  **p_p) { *p_p = &indices; }
    void get_storage(Storage<jrf::MODEL>    **p_p) { *p_p = &models;  }

    template<jrf::ResourceType RT>
    [[nodiscard]] jen::Result
    insert(jl::string *p_path_move,
           typename jrf::Resource<RT>::T *p_jrf_resource,
           detail::ResHandle<RT> *p_dst);
    template<jrf::ResourceType RT>
    Result create_res(jl::string *p_path, detail::ResHandle<RT> *p_dst);
    template<jrf::ResourceType RT>
    Result create(const jl::string_ro &path, Resource<RT> *p_dst);
    template<jrf::ResourceType RT>
    Result create(const jl::string_ro &path, detail::ResHandle<RT> *p_dst);
    template<jrf::ResourceType RT>
    Result create(jl::string *p_path_move, detail::ResHandle<RT> **pp_dst);
    template<jrf::ResourceType RT>
    void destroy(const jl::string_ro &path);

    template<jrf::ResourceType RT, typename RM_res, typename Render_Res>
    Result
    create_res_ref(jrf::Data<typename jrf::Resource<RT>::T> *p_jrf,
                   jen::detail::ResRef<RM_res> *p_dst,
                   Render_Res *p_dst2);

    template<jrf::ResourceType RT>
    [[nodiscard]] bool
    find_and_apply_ref_count(const jl::string_ro &path,
                             detail::ResHandle<RT> *p_dst);

    Result
    create_mesh_res(jrf::Mesh *p_jrf_src,
                    jen::detail::RM_Mesh *p_rm_mesh,
                    jen::VertexData *p_dst_ver,
                    jen::IndexData *p_dst_ind);
    void
    mesh_resources(const jen::detail::RM_Mesh &mesh,
                   jen::VertexData *p_dst_ver,
                   jen::IndexData *p_dst_ind);
    [[nodiscard]] bool
    find_and_apply_ref_count(const jl::string_ro &mesh_path,
                             jen::VertexData *p_dst_ver,
                             jen::IndexData *p_dst_ind);

    [[nodiscard]] bool
    find_and_apply_ref_count(const jl::string_ro &model_path,
                             jen::VertexData *p_dst_ver,
                             jen::IndexData *p_dst_ind,
                             jen::TextureData *p_dst_img);

    Result create_mesh_res(jl::string *p_path,
                           jen::VertexData *p_dst_ver,
                           jen::IndexData *p_dst_ind);

    Result create_model_res(jl::string *p_path,
                            jen::VertexData *p_dst_ver,
                            jen::IndexData *p_dst_ind,
                            jen::TextureData *p_dst_img);

    Storage<jrf::IMAGE>    images;
    Storage<jrf::VERTICES> vertices;
    Storage<jrf::INDICES>  indices;
    Storage<jrf::MESH>     meshes;
    Storage<jrf::MODEL>    models;
    Storage<jrf::SCENE>    scenes;
    ModuleGraphics moduleGraphics;
};
File src/settings.h deleted (index b235a8b..0000000)

#pragma once
#include "instance/instance.h"
#include "graphics/settings.h"

namespace jen { struct Settings; }

struct jen::Settings {
    ThreadPoolSettings thread_pool;
    WindowSettings     window;
    GraphicsSettings   graphics;

    constexpr void set_default() {
        thread_pool.queues_count = 1;
        thread_pool.threads_count = 0;
        thread_pool.queue_indices.drawFrame = 0;

        window.p_title_str = "";

        graphics.draw_mode = GraphicsSettings::DrawMode::DEFAULT;
        graphics.cull_mode = vkw::CullMode::BACK;
        graphics.shading = GraphicsSettings::Shading::DEFAULT;
        graphics.shadows.bias = 0.05f;
        graphics.shadows.pcss_search = GraphicsSettings::Filter::_16;
        graphics.shadows.pcf = GraphicsSettings::Filter::_32;
        graphics.shadows.extent = 512;
        graphics.multisampling = 1;
        graphics.is_vSync_enabled = true;
        graphics.wait_for_gpu_frame_draw = true;
        graphics.wait_for_monitor = true;
        graphics.is_debug_normals_visible = false;
        graphics.is_debug_depth_cube_visible = false;

        graphics.debug_overlay.is_enabled = true;
        graphics.debug_overlay.is_visible = false;
        graphics.debug_overlay.toggle_key = Key::Board::f1;
        graphics.debug_overlay.font_path = "fonts//IBMPlexMono.ttf";
    }
    [[nodiscard]] constexpr static Settings get_default() {
        jen::Settings s = {};
        s.set_default();
        return s;
    }
};