Overview
Raily Vector Store is a fully managed vector database with built-in access control, analytics, and semantic search. No external vector database setup is required.
Features
Instant Setup: Start storing and searching vectors immediately with zero configuration.
Built-in Access Control: Every vector query respects your access policies automatically.
High Performance: Optimized for fast similarity search with sub-100ms query times.
Auto-scaling: Automatically scales to handle millions of vectors.
Indexing Content
import Raily from '@raily/sdk';
import OpenAI from 'openai';

const raily = new Raily({
  apiKey: process.env.RAILY_API_KEY
});

const openai = new OpenAI();

// Index content with embeddings
async function indexContent(contentId, text) {
  // Generate embedding
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text
  });

  // Store in Raily Vector Store
  await raily.vectorStore.index({
    contentId: contentId,
    vector: embedding.data[0].embedding,
    metadata: {
      text: text,
      indexed_at: new Date().toISOString()
    }
  });
}
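For example, indexing a single article (the content ID and text here are placeholders):

// Hypothetical usage of the helper above
await indexContent(
  "cnt_article_123",
  "Raily Vector Store pairs semantic search with access control."
);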
Semantic Search
// Search with automatic access control
async function semanticSearch(query, requesterId, options = {}) {
  // Generate query embedding
  const queryEmbedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: query
  });

  // Search with built-in access control
  const results = await raily.vectorStore.search({
    vector: queryEmbedding.data[0].embedding,
    requesterId: requesterId,
    limit: options.limit || 5,
    filter: options.filter,
    context: {
      purpose: "semantic_search",
      query: query
    }
  });

  return results.map(result => ({
    contentId: result.contentId,
    text: result.metadata.text,
    score: result.score,
    allowed: result.access.allowed
  }));
}
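A typical call looks like this (the query and requester ID are placeholders):

// Hypothetical usage: top 3 results for one requester
const hits = await semanticSearch("How does auto-scaling work?", "usr_123", { limit: 3 });
hits.forEach(hit => console.log(hit.score, hit.text, hit.allowed));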
RAG Implementation
async function answerQuestion(question, requesterId) {
  // Step 1: Find relevant content
  const searchResults = await semanticSearch(question, requesterId, {
    limit: 3
  });

  // Filter for allowed results
  const allowedDocs = searchResults.filter(r => r.allowed);

  if (allowedDocs.length === 0) {
    return {
      answer: "I don't have access to information to answer this question.",
      sources: []
    };
  }

  // Step 2: Build context
  const context = allowedDocs
    .map((doc, i) => `[${i + 1}] ${doc.text}`)
    .join('\n\n');

  // Step 3: Generate answer
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: `Answer based on this context:\n\n${context}`
      },
      {
        role: "user",
        content: question
      }
    ]
  });

  return {
    answer: completion.choices[0].message.content,
    sources: allowedDocs.map(doc => ({
      contentId: doc.contentId,
      score: doc.score
    }))
  };
}
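The helper can then back a simple question-answering flow (the requester ID is a placeholder):

// Hypothetical usage of the RAG helper above
const { answer, sources } = await answerQuestion(
  "How does built-in access control work?",
  "usr_123"
);
console.log(answer);
console.log("Sources:", sources.map(s => s.contentId).join(", "));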
Batch Indexing
// Index multiple documents efficiently
async function batchIndexContent(documents) {
  // Generate embeddings for all texts in a single request
  const texts = documents.map(doc => doc.text);
  const embeddings = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: texts
  });

  // Batch index to Raily
  const items = documents.map((doc, i) => ({
    contentId: doc.id,
    vector: embeddings.data[i].embedding,
    metadata: {
      text: doc.text,
      ...doc.metadata
    }
  }));

  await raily.vectorStore.batchIndex({
    items: items
  });

  console.log(`Indexed ${items.length} documents`);
}
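The OpenAI embeddings endpoint limits how many inputs a single request may carry, so very large corpora should be chunked. A minimal sketch; batchIndexAll and the batch size of 100 are illustrative assumptions, not part of either SDK:

// Chunk a large corpus into smaller embedding/indexing batches.
// The batch size of 100 is an arbitrary assumption, not a documented limit.
async function batchIndexAll(documents, batchSize = 100) {
  for (let i = 0; i < documents.length; i += batchSize) {
    await batchIndexContent(documents.slice(i, i + batchSize));
  }
}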
Filtering
// Search with metadata filters
// (queryEmbedding comes from an earlier openai.embeddings.create call)
const results = await raily.vectorStore.search({
  vector: queryEmbedding.data[0].embedding,
  requesterId: "app_id",
  limit: 10,
  filter: {
    // Only search in specific categories
    "metadata.category": { $in: ["research", "technical"] },
    // Published on or after 2024-01-01
    "metadata.published_date": { $gte: "2024-01-01" },
    // Exclude drafts
    "metadata.status": { $ne: "draft" }
  }
});
Updating Vectors
// Update existing vector
async function updateVector(contentId, newText) {
  // Generate new embedding
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: newText
  });

  // Update in Raily Vector Store
  await raily.vectorStore.update({
    contentId: contentId,
    vector: embedding.data[0].embedding,
    metadata: {
      text: newText,
      updated_at: new Date().toISOString()
    }
  });
}
Deleting Vectors
// Delete a single vector from the store
await raily.vectorStore.delete({
  contentId: "cnt_article_123"
});

// Batch delete
await raily.vectorStore.batchDelete({
  contentIds: ["cnt_1", "cnt_2", "cnt_3"]
});
Analytics
// Get vector store usage analytics
const analytics = await raily.vectorStore.analytics({
  period: "last_30_days",
  metrics: ["searches", "indexed_items", "avg_latency"]
});

console.log(`Total searches: ${analytics.searches}`);
console.log(`Total vectors: ${analytics.indexed_items}`);
console.log(`Avg latency: ${analytics.avg_latency}ms`);
Performance Tips
Batch Operations: Use batch indexing and batch search for better throughput.
Result Caching: Cache frequent queries to reduce latency and costs (see the sketch after this list).
Metadata Indexing: Index frequently filtered metadata fields for faster filtered searches.
Smart Limits: Request only the number of results you need to minimize processing time.
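A minimal in-memory cache around semanticSearch, assuming exact-match query reuse; the Map-based store and cache key are illustrative choices, not part of the SDK:

// Illustrative in-memory cache; keying on query + requester + options so
// cached results never leak across access-control boundaries.
const searchCache = new Map();

async function cachedSemanticSearch(query, requesterId, options = {}) {
  const key = JSON.stringify([query, requesterId, options]);
  if (searchCache.has(key)) {
    return searchCache.get(key);
  }
  const results = await semanticSearch(query, requesterId, options);
  searchCache.set(key, results);
  return results;
}

A production cache would also need a TTL or explicit invalidation whenever vectors are updated or deleted, so stale results are not served.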
Pricing
Plan         Vectors    Searches/month   Storage     Price
Free         10,000     1,000            1 GB        $0
Pro          1M         100,000          100 GB      $49/mo
Enterprise   Unlimited  Unlimited        Unlimited   Custom
Next Steps
Qdrant: Self-host with Qdrant
LLM Providers: Integrate with AI providers