
Developer Guide

This guide describes how to build the LMA project from source, run it locally for development, and contribute changes.

You need the following installed on your machine:

| Dependency | Version |
| --- | --- |
| bash | Linux, macOS, or Windows WSL |
| Node.js | v18, v20, or v22 |
| npm | Bundled with Node.js |
| Docker | Running (required for SAM builds); on macOS, use Docker Desktop |
| zip | Any version |
| Python 3 | With pip3 |
| virtualenv | `pip3 install virtualenv` |
| AWS CLI | Configured with credentials |
| AWS SAM CLI | >= 1.118.0 |
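The kind of prerequisite check that `lma-cli check-prereqs` performs can be approximated in a few lines. The sketch below (helper names are illustrative, not part of the CLI) parses `node --version` output against the supported majors from the table above:

```python
import re
import shutil
import subprocess

SUPPORTED_NODE_MAJORS = {18, 20, 22}  # from the prerequisites table

def node_major_version(version_output: str) -> int:
    """Parse the major version out of `node --version` output (e.g. 'v20.11.0')."""
    match = re.match(r"v(\d+)\.", version_output.strip())
    if not match:
        raise ValueError(f"unrecognized version string: {version_output!r}")
    return int(match.group(1))

def check_node() -> bool:
    """Return True if a supported Node.js release is on PATH."""
    if shutil.which("node") is None:
        return False
    out = subprocess.run(["node", "--version"], capture_output=True, text=True).stdout
    return node_major_version(out) in SUPPORTED_NODE_MAJORS
```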
Repository layout:

```
lma-main.yaml                        # Main CloudFormation orchestration template
publish.sh                           # Build and publish script
VERSION                              # Current version (0.3.0)
lma-ai-stack/                        # Core: Lambdas, AppSync, DynamoDB, UI
├── deployment/                      # CloudFormation templates
├── source/
│   ├── lambda_functions/            # 19 Python Lambda functions
│   ├── lambda_layers/               # Shared Python/Node layers
│   ├── appsync/                     # GraphQL schema and 39 resolvers
│   └── ui/                          # React web application
├── Makefile                         # Build orchestration
└── config.mk                        # Build configuration
lma-websocket-transcriber-stack/     # WebSocket server (TypeScript/Fastify on Fargate)
lma-virtual-participant-stack/       # Virtual participant (TypeScript/Puppeteer on ECS)
lma-vpc-stack/                       # VPC networking
lma-cognito-stack/                   # Cognito auth
lma-meetingassist-setup-stack/       # Strands agent config
lma-bedrockkb-stack/                 # Bedrock Knowledge Base
lma-llm-template-setup-stack/        # LLM prompt templates
lma-chat-button-config-stack/        # Chat button config
lma-nova-sonic-config-stack/         # Nova Sonic config
docs/                                # Documentation (you are here)
```

Set up your development environment (installs Node.js, Python venv, SDK, and CLI):

```sh
make setup
```

Then verify your prerequisites:

```sh
lma-cli check-prereqs
```

The simplest way to build from source and deploy:

```sh
lma-cli deploy --stack-name LMA --from-code . --admin-email user@example.com --wait
```

This builds all stacks, publishes artifacts to S3, and deploys the CloudFormation stack in one command. Use `--wait` to monitor progress with real-time event streaming.

```sh
lma-cli publish --source-dir . --region us-east-1
```

This packages all sub-stacks, uploads to S3, and outputs a CloudFormation template URL you can use later.

```sh
lma-cli deploy --stack-name LMA --template-url <template-url> --admin-email user@example.com --wait
```

See the LMA CLI Reference for the full list of options.

Both `lma-cli publish` and `lma-cli deploy --from-code` use content-hash-based checksums to skip rebuilding unchanged stacks on subsequent runs.
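One plausible way such content-hash skipping works, sketched in Python (the function names and the checksum-file convention are assumptions for illustration, not the actual implementation):

```python
import hashlib
from pathlib import Path

def dir_checksum(root: str) -> str:
    """Hash every file's relative path and contents under root into one digest."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def needs_rebuild(stack_dir: str, checksum_file: str) -> bool:
    """Compare the current checksum against the one recorded after the last build."""
    current = dir_checksum(stack_dir)
    recorded = Path(checksum_file)
    if recorded.exists() and recorded.read_text().strip() == current:
        return False  # source unchanged since last build -> skip
    recorded.write_text(current)  # record for the next run
    return True
```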

Publishing and deploying from source works on both Linux and macOS (including Apple Silicon). On macOS:

  • Docker Desktop must be installed and running; it handles x86_64 emulation via Rosetta, so no additional QEMU setup is needed.
  • Enable Rosetta emulation: Open Docker Desktop → Settings → General → Enable “Use Rosetta for x86_64/amd64 emulation on Apple Silicon”, then restart Docker Desktop.
  • SAM CLI container preference: If SAM CLI is configured to use Finch (via /Library/Preferences/com.amazon.samcli.plist), but Finch is not installed, builds will fail. Fix with:
    ```sh
    sudo plutil -replace DefaultContainerRuntime -string docker /Library/Preferences/com.amazon.samcli.plist
    ```

    Or remove the preference entirely to let SAM CLI auto-detect Docker: `sudo rm /Library/Preferences/com.amazon.samcli.plist`

The React UI is in `lma-ai-stack/source/ui/`. The simplest way to start the UI dev server is:

```sh
make ui-start STACK_NAME=<your-stack-name>
```

This automatically retrieves the .env configuration from your deployed stack’s CloudFormation outputs, installs dependencies, and starts the development server at http://localhost:3000. The page reloads on edits.

Other npm scripts (run from `lma-ai-stack/source/ui/`):

  • `npm test` — Run Jest tests in watch mode
  • `npm run build` — Production build to `build/`
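The `.env` retrieval that `make ui-start` performs can be sketched as a small script that reads CloudFormation stack outputs. The output-key-to-variable mapping below is hypothetical (your stack's actual output keys may differ), and `fetch_env` requires AWS credentials:

```python
def outputs_to_env(outputs: list[dict], mapping: dict[str, str]) -> str:
    """Render CloudFormation stack outputs as NAME=value lines for a .env file."""
    values = {o["OutputKey"]: o["OutputValue"] for o in outputs}
    lines = [f"{env_name}={values[key]}" for key, env_name in mapping.items() if key in values]
    return "\n".join(lines) + "\n"

def fetch_env(stack_name: str, mapping: dict[str, str]) -> str:
    """Fetch outputs from a deployed stack and render them (needs AWS credentials)."""
    import boto3  # imported here so the pure helper above has no AWS dependency
    cfn = boto3.client("cloudformation")
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    return outputs_to_env(stack.get("Outputs", []), mapping)
```

For example, `fetch_env("LMA", {"WebAppUrl": "REACT_APP_URL"})` would yield one `REACT_APP_URL=...` line, assuming a `WebAppUrl` output exists.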

The WebSocket transcription server is in `lma-websocket-transcriber-stack/source/app/`.

```sh
cd lma-websocket-transcriber-stack/source/app
npm install
npm run build   # TypeScript compilation
npm test        # Jest tests
```

The server is a TypeScript/Fastify application deployed as a Docker container on ECS Fargate behind an Application Load Balancer.

The virtual participant (VP) backend is in `lma-virtual-participant-stack/backend/`.

```sh
cd lma-virtual-participant-stack/backend
npm install
npm run build   # TypeScript compilation
```

Build and run the VP container locally:

```sh
cd lma-virtual-participant-stack
docker build -t lma-vp .
```

Run with required environment variables:

```sh
docker run \
  --env MEETING_ID=123456789 \
  --env MEETING_PASSWORD=abc123 \
  --env MEETING_NAME=TestMeeting \
  --env AWS_DEFAULT_REGION=us-east-1 \
  --env KINESIS_STREAM_NAME=<CallDataStreamName> \
  --env SHOULD_RECORD_CALL=true \
  --env RECORDINGS_BUCKET_NAME=<RecordingsS3Bucket> \
  --env RECORDINGS_KEY_PREFIX=lca-audio-recordings/ \
  --env MEETING_PLATFORM=Zoom \
  --env USER_NAME=TestUser \
  lma-vp
```
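For scripted testing, the `--env` flags above can be generated from a dict instead of typed by hand. A minimal sketch (the helper name is illustrative):

```python
def docker_run_command(image: str, env: dict[str, str]) -> list[str]:
    """Build a `docker run` argv with one --env flag per variable."""
    cmd = ["docker", "run"]
    for name, value in env.items():
        cmd += ["--env", f"{name}={value}"]
    return cmd + [image]
```

Pass the result to `subprocess.run(...)` to launch the container.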

Execute the VP scheduler Step Function directly:

```sh
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:SchedulerStateMachine-XXXX \
  --input '{"apiInfo":{"httpMethod":"POST"},"data":{"meetingPlatform":"Zoom","meetingID":"12345678","meetingPassword":"a1b2c3","meetingName":"Test","meetingTime":"","userName":"Bob"}}'
```

Supported `httpMethod` values: `POST` (join/schedule), `GET` (list scheduled), `DELETE` (cancel scheduled).
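The scheduler input shown above can also be built programmatically. A sketch using the field names from the example (the validation logic is illustrative, not part of the scheduler):

```python
import json

def scheduler_input(http_method: str, **data: str) -> str:
    """Serialize a VP scheduler Step Function input.

    'POST' joins/schedules, 'GET' lists scheduled meetings, 'DELETE' cancels one.
    """
    if http_method not in {"POST", "GET", "DELETE"}:
        raise ValueError(f"unsupported httpMethod: {http_method}")
    return json.dumps({"apiInfo": {"httpMethod": http_method}, "data": data})
```

The returned string can be passed directly as the `--input` argument to `aws stepfunctions start-execution`.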

A test utility in `utilities/websocket-client/` streams WAV file audio to the WebSocket server:

```sh
cd utilities/websocket-client
npm run setup   # Install dependencies (first time)
npm run build   # Build TypeScript
```

Configure environment variables (export them or use a `.env` file):

```sh
SAMPLE_RATE=8000
BYTES_PER_SAMPLE=2
CHUNK_SIZE_IN_MS=200
CALL_FROM_NUMBER='LCA-Client'
CALL_TO_NUMBER='+8001112222'
AGENT_ID='TestAgent'
LMA_ACCESS_JWT_TOKEN=<access_token>
LMA_ID_JWT_TOKEN=<id_token>
LMA_REFRESH_JWT_TOKEN=<refresh_token>
```

Get the JWT tokens from an authenticated LMA user session. Then run:

```sh
npm run start -- --uri <WebSocket_Server_Endpoint> --wavfile <file.wav>
```

The WebSocket server endpoint is in the CloudFormation stack Outputs.
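The audio framing implied by the variables above is simple arithmetic: at `SAMPLE_RATE=8000`, `BYTES_PER_SAMPLE=2`, and `CHUNK_SIZE_IN_MS=200`, each message carries 3200 bytes of PCM per channel. A sketch of that calculation (the function name is illustrative, not part of the client):

```python
def chunk_size_bytes(sample_rate: int, bytes_per_sample: int,
                     chunk_ms: int, channels: int = 1) -> int:
    """Bytes of PCM audio per WebSocket message at the configured frame duration."""
    return sample_rate * bytes_per_sample * channels * chunk_ms // 1000
```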

From `lma-ai-stack/`, the Makefile provides linting targets (requires the `CONFIG_ENV` environment variable):

| Target | Tool | What it checks |
| --- | --- | --- |
| `make lint-cfn-lint` | cfn-lint | CloudFormation templates |
| `make lint-yamllint` | yamllint | YAML syntax |
| `make lint-pylint` | pylint | Python code (100 char lines) |
| `make lint-mypy` | mypy | Python type annotations |
| `make lint-bandit` | bandit | Python security |
| `make lint-validate` | SAM CLI | Template validation |

Code style conventions:

  • Python: Black formatter, Flake8, Pylint. 100-character line limit. Config in `.pylintrc`, `.flake8`.
  • JavaScript/TypeScript: ESLint (airbnb-base) + Prettier. 120-character line limit, single quotes, trailing commas. Config in `.eslintrc.json`, `.prettierrc`.

| What to customize | How | Docs |
| --- | --- | --- |
| LLM summary prompts | DynamoDB or admin UI | Transcript Summarization |
| Chat shortcut buttons | Admin UI | Meeting Assistant |
| MCP server integrations | Admin UI | MCP Servers |
| Knowledge Base documents | S3 bucket or web crawling | Meeting Assistant |
| Bedrock Guardrails | CloudFormation parameter | Meeting Assistant |
| Transcript processing | Custom Lambda function | Lambda Hook Functions |
| Voice assistant prompts | DynamoDB or admin UI | Nova Sonic 2 Setup |

See CONTRIBUTING.md for guidelines on reporting bugs, requesting features, and submitting pull requests.