
M.I.M.I.R - Multi-agent Intelligent Memory & Insight Repository

by orneryd
ggml-vulkan.h (952 B)
#pragma once

#include "ggml.h"
#include "ggml-backend.h"

#ifdef __cplusplus
extern "C" {
#endif

#define GGML_VK_NAME "Vulkan"
#define GGML_VK_MAX_DEVICES 16

// backend API
GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);

GGML_BACKEND_API bool ggml_backend_is_vk(ggml_backend_t backend);
GGML_BACKEND_API int  ggml_backend_vk_get_device_count(void);
GGML_BACKEND_API void ggml_backend_vk_get_device_description(int device, char * description, size_t description_size);
GGML_BACKEND_API void ggml_backend_vk_get_device_memory(int device, size_t * free, size_t * total);

GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_vk_buffer_type(size_t dev_num);

// pinned host buffer for use with the CPU backend for faster copies between CPU and GPU
GGML_BACKEND_API ggml_backend_buffer_type_t ggml_backend_vk_host_buffer_type(void);

GGML_BACKEND_API ggml_backend_reg_t ggml_backend_vk_reg(void);

#ifdef __cplusplus
}
#endif
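For orientation, here is a minimal sketch of how the API declared above might be used to enumerate Vulkan devices and bring up a backend. Only the ggml_backend_vk_* declarations from this header and ggml_backend_free from ggml-backend.h are taken from the source; the program structure, the 256-byte description buffer, and the choice of device 0 are illustrative assumptions, not a prescribed usage pattern.

// Sketch: list Vulkan devices, then initialize the backend on device 0.
// Error handling and the surrounding ggml graph setup are omitted.
#include <stdio.h>
#include "ggml-vulkan.h"

int main(void) {
    int n = ggml_backend_vk_get_device_count();
    for (int i = 0; i < n; i++) {
        char   desc[256];                      // assumed buffer size
        size_t free_mem = 0, total_mem = 0;
        ggml_backend_vk_get_device_description(i, desc, sizeof(desc));
        ggml_backend_vk_get_device_memory(i, &free_mem, &total_mem);
        printf("device %d: %s (%zu of %zu bytes free)\n", i, desc, free_mem, total_mem);
    }

    if (n > 0) {
        ggml_backend_t backend = ggml_backend_vk_init(0);
        if (backend != NULL) {
            // ... build and compute a ggml graph on the backend ...
            ggml_backend_free(backend);        // declared in ggml-backend.h
        }
    }
    return 0;
}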


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/orneryd/Mimir'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.