Pull request overview
This pull request adds a Jupyter notebook example demonstrating semantic memory REST API functionality, specifically focusing on set_metadata scoping. The PR includes the necessary dependencies (jupyter and nbconvert), a new CI workflow job to validate the notebook, and a configuration file for the CI test environment.
Changes:
- Adds a new Jupyter notebook (`rest_api_set_metadata.ipynb`) demonstrating semantic memory `set_metadata` scoping with the REST API
- Adds `jupyter` and `nbconvert` as development dependencies to support notebook examples
- Adds a CI workflow job to automatically test the notebook execution against a live MemMachine server
- Adds a CI configuration file for running MemMachine with OpenAI and PostgreSQL/pgvector
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| examples/rest_api_set_metadata.ipynb | New notebook demonstrating semantic memory set_metadata scoping via REST API |
| pyproject.toml | Adds jupyter>=1.1.0 and nbconvert>=7.16.0 to dev dependencies |
| uv.lock | Lock file updates for new jupyter and nbconvert dependencies and their transitive dependencies |
| .github/workflows/pytest-integration.yml | Adds new CI job to execute the notebook against a live MemMachine server |
| .github/ci/memmachine-openai-pgvector.yml | Configuration file for CI testing with OpenAI and pgvector backend |
```yaml
host: localhost
port: 55432
user: memmachine
password: memmachine
db_name: memmachine
```
The configuration file hardcodes credentials (user: memmachine, password: memmachine) in plain text. While this is only for CI testing purposes, consider adding a comment in the config file clarifying that it is specifically for CI use and should never be used in production environments. This helps prevent accidental misuse if someone copies this configuration.
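One way to make that intent explicit is a comment header in the config itself; a minimal sketch, assuming the snippet above reflects the file's actual layout:

```yaml
# CI-ONLY configuration: these credentials belong to the throwaway
# PostgreSQL/pgvector container started in GitHub Actions.
# Do NOT copy these values into a production deployment.
host: localhost
port: 55432
user: memmachine
password: memmachine
db_name: memmachine
```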
```yaml
- name: Wait for server health
  env:
    MEMORY_BACKEND_URL: http://127.0.0.1:8080
  run: |
    uv run python - <<'PY'
    import os
    import time
    import requests

    url = os.environ["MEMORY_BACKEND_URL"] + "/api/v2/health"
    for _ in range(60):
        try:
            r = requests.get(url, timeout=2)
            if r.status_code == 200:
                print("Server ready")
                raise SystemExit(0)
        except Exception:
            pass
        time.sleep(2)
    raise SystemExit("Server did not become ready in time")
    PY
```
Consider verifying the server is actually responding correctly before executing the notebook. The health check succeeds whenever status_code is 200, but it would be more robust to also verify the response content and confirm the server is fully functional, similar to how installation-test.yml performs both health and readiness checks.
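A sketch of how the polling loop could be extended into a health-then-readiness gate, using only the standard library. The `/api/v2/health` path comes from the workflow above; the `/api/v2/ready` path is a hypothetical endpoint named only to illustrate the two-stage pattern the review points to in installation-test.yml:

```python
import time
import urllib.error
import urllib.request


def wait_until_ok(url, attempts=60, delay=2.0):
    """Poll `url` until it returns HTTP 200; True on success, False on timeout."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        if delay:
            time.sleep(delay)
    return False


def assert_server_ready(base_url):
    # Health first (endpoint used in this workflow), then readiness
    # (/api/v2/ready is an assumption, not an endpoint confirmed by this PR).
    if not wait_until_ok(base_url + "/api/v2/health"):
        raise SystemExit("Server did not become healthy in time")
    if not wait_until_ok(base_url + "/api/v2/ready"):
        raise SystemExit("Server reported healthy but readiness check failed")
    print("Server ready")
```

Keeping the two checks separate makes CI failures easier to diagnose: a healthy-but-not-ready server points at backend initialization rather than the process itself.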
```yaml
- name: Stop MemMachine server and database
  if: always()
  run: |
    kill $(cat memmachine-server.pid) || true
    docker rm -f memmachine-pgvector || true
```
Consider uploading server logs and the executed notebook as artifacts when the test fails; this would make debugging much easier. Other workflows in this repository (like installation-test.yml) upload logs with `if: always()` via `actions/upload-artifact@v4`. The executed notebook output could also be valuable for debugging.
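A minimal sketch of such a step, placed before the cleanup step above. The log and notebook paths are assumptions for illustration, not paths confirmed by this PR:

```yaml
- name: Upload debug artifacts
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: notebook-debug-artifacts
    path: |
      memmachine-server.log
      examples/rest_api_set_metadata.ipynb
    if-no-files-found: ignore
```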