We'll use a project named along the lines of vertex-chat-streamlit and the us-central1 region in this course. Keep your services in the same region for fewer surprises. Create a service account (e.g., vertex-chatbot-sa) and download its JSON key, for example to C:\VertexChatbot\keys\sa.json (Windows) or /Users/you/VertexChatbot/keys/sa.json (macOS/Linux). Then initialize the gcloud CLI:

```
gcloud init
gcloud config set project YOUR_PROJECT_ID
gcloud config set compute/region us-central1
gcloud auth application-default login
```
gcloud auth application-default login sets up Application Default Credentials (ADC) on your machine—perfect for PyCharm & Streamlit during development.
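To confirm ADC is in place without calling any API, you can check for the credentials file that `gcloud auth application-default login` writes. The paths below are gcloud's defaults (the location changes if `CLOUDSDK_CONFIG` is set), and the `adc_path` helper is just a hypothetical convenience for this check:

```python
# Sketch: locate the ADC file written by `gcloud auth application-default login`.
# Paths are gcloud's defaults; adjust if CLOUDSDK_CONFIG overrides the config dir.
import os
import sys

def adc_path() -> str:
    """Return the default ADC file location for the current platform."""
    if sys.platform.startswith("win"):
        base = os.environ.get("APPDATA", "")
        return os.path.join(base, "gcloud", "application_default_credentials.json")
    return os.path.expanduser("~/.config/gcloud/application_default_credentials.json")

status = "exists" if os.path.isfile(adc_path()) else "missing"
print(f"ADC file {status}: {adc_path()}")
```

If the file is missing, re-run the login command above before testing any Vertex AI calls.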
```
REM Windows (cmd)
set GOOGLE_APPLICATION_CREDENTIALS=C:\VertexChatbot\keys\sa.json

# macOS/Linux
export GOOGLE_APPLICATION_CREDENTIALS=/Users/you/VertexChatbot/keys/sa.json
```
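If you go the key-file route, a common failure mode is a variable that is set but points at a moved or misspelled path. A small stdlib-only check (the `key_file_status` helper here is hypothetical, not part of any Google library) catches this early:

```python
# Sketch: confirm GOOGLE_APPLICATION_CREDENTIALS is set and points at a real file.
import os

def key_file_status(var: str = "GOOGLE_APPLICATION_CREDENTIALS") -> str:
    """Report whether the key-file variable is usable."""
    path = os.environ.get(var)
    if path is None:
        return f"{var} is not set (ADC will be used instead)"
    if not os.path.isfile(path):
        return f"{var} points to a missing file: {path}"
    return f"{var} -> {path}"

print(key_file_status())
```

Remember that ADC is preferred in development; the key file is mainly for environments where running `gcloud` interactively isn't an option.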
```
gcloud auth list
gcloud config list
gcloud services list --enabled | findstr vertex   # Windows
gcloud services list --enabled | grep vertex      # macOS/Linux
```
Install the library in your virtual env:
pip install --upgrade google-cloud-aiplatform streamlit python-dotenv
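A quick way to verify the install succeeded is to import each package. Note the import names differ from the pip names: `google-cloud-aiplatform` provides `vertexai`, and `python-dotenv` imports as `dotenv`. The `check_imports` helper is a hypothetical convenience for this tutorial:

```python
# Sketch: verify the three packages from the pip line import cleanly.
import importlib

def check_imports(packages=("vertexai", "streamlit", "dotenv")):
    """Return one status line per package: OK or MISSING."""
    report = []
    for name in packages:
        try:
            importlib.import_module(name)
            report.append(f"{name}: OK")
        except ImportError:
            report.append(f"{name}: MISSING - rerun the pip install above")
    return report

for line in check_imports():
    print(line)
```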
Create test_vertex.py with:
```python
# test_vertex.py
# Quick smoke test for Vertex AI (Gemini) from your laptop/PyCharm.
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "YOUR_PROJECT_ID"
LOCATION = "us-central1"

vertexai.init(project=PROJECT_ID, location=LOCATION)

model = GenerativeModel("gemini-1.5-pro")  # prebuilt LLM, no training needed
resp = model.generate_content("Say hello in five words.")

print("✅ Call OK. Model replied:")
print(resp.text)
```
Run it; you should see something like:

```
✅ Call OK. Model replied:
Hello there, nice to meet!
```
If the call fails with an authentication error, re-run `gcloud auth application-default login` and select the correct account, then confirm your settings with `gcloud config list`.

**.env for PyCharm**

Create a `.env` file at your project root (don't commit it to public repos):
```
GCP_PROJECT_ID=YOUR_PROJECT_ID
GCP_LOCATION=us-central1

# For key-based auth (optional; prefer ADC in dev)
# GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json
```
Install python-dotenv (already included in the earlier pip line) and load it in code:
```python
from dotenv import load_dotenv
import os
import vertexai
from vertexai.generative_models import GenerativeModel

load_dotenv()

vertexai.init(
    project=os.getenv("GCP_PROJECT_ID"),
    location=os.getenv("GCP_LOCATION", "us-central1"),
)

model = GenerativeModel("gemini-1.5-pro")
print(model.generate_content("Environment variables loaded successfully.").text)
```
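One caveat with `os.getenv`: if `GCP_PROJECT_ID` is missing, `vertexai.init` receives `None` and the failure surfaces later with a confusing message. A fail-fast sketch (the `load_settings` helper and its variable names are assumptions for this tutorial, not a library API) makes the error obvious up front:

```python
# Sketch: fail fast when required settings are missing from the environment/.env.
import os

REQUIRED = ("GCP_PROJECT_ID",)          # must be set; no sensible default
DEFAULTS = {"GCP_LOCATION": "us-central1"}  # optional, with fallback

def load_settings() -> dict:
    """Collect settings, raising immediately if a required variable is unset."""
    missing = [key for key in REQUIRED if not os.getenv(key)]
    if missing:
        raise RuntimeError(
            f"Missing env vars: {', '.join(missing)} - check your .env file"
        )
    settings = {key: os.environ[key] for key in REQUIRED}
    for key, default in DEFAULTS.items():
        settings[key] = os.getenv(key, default)
    return settings
```

Call `load_settings()` once at startup and pass the result into `vertexai.init`, so a misconfigured machine fails with a clear message instead of a cryptic API error.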
In Step 2 we’ll create the **Streamlit app shell** (UI), wire it to Vertex AI, and run locally. We will get a chat input box and a response area working end-to-end.