C:/VertexChatbot/
├─ keys/
│  └─ sa.json            ← your service account JSON
├─ app/
│  ├─ main.py            ← we will test the LLM call here
│  └─ requirements.txt
└─ .env                  ← optional: environment variables
# In the PyCharm terminal (or cmd)
python -m venv .venv

# Activate:
# Windows:     .venv\Scripts\activate
# macOS/Linux: source .venv/bin/activate

pip install --upgrade pip
pip install google-cloud-aiplatform flask python-dotenv
Create requirements.txt (so deployment is easy):
google-cloud-aiplatform flask python-dotenv
Create .env at project root and add:
GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json
GCP_PROJECT_ID=your-project-id
GCP_LOCATION=us-central1
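Before the first API call, it can help to confirm the variables actually loaded. A minimal sanity-check sketch — the `missing_vars` helper and `REQUIRED` tuple are this tutorial's own convention, not part of any SDK:

```python
REQUIRED = ("GOOGLE_APPLICATION_CREDENTIALS", "GCP_PROJECT_ID")

def missing_vars(env):
    """Return the names of required variables absent (or empty) in the mapping."""
    return [name for name in REQUIRED if not env.get(name)]

# Example with a fully populated environment mapping:
sample = {
    "GOOGLE_APPLICATION_CREDENTIALS": "C:/VertexChatbot/keys/sa.json",
    "GCP_PROJECT_ID": "your-project-id",
}
print(missing_vars(sample))  # []
print(missing_vars({}))      # ['GOOGLE_APPLICATION_CREDENTIALS', 'GCP_PROJECT_ID']
```

In main.py you would pass `os.environ` (after `load_dotenv()`) instead of the sample dict and fail fast if the list is non-empty.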
Alternatively, set these in PyCharm ▸ Run/Debug Configurations ▸ Environment variables: enable "Include system environment variables" and add the three variables above.
app/main.py:
import os

from dotenv import load_dotenv
import vertexai
from vertexai.language_models import TextGenerationModel

load_dotenv()  # load .env so GOOGLE_APPLICATION_CREDENTIALS is picked up

project_id = os.getenv("GCP_PROJECT_ID")
location = os.getenv("GCP_LOCATION", "us-central1")

# Initialize the Vertex AI SDK
vertexai.init(project=project_id, location=location)

# Use a pre-trained text model (no training needed)
model = TextGenerationModel.from_pretrained("text-bison")  # or a pinned version like "text-bison@001"

prompt = "In one sentence, what is Vertex AI?"
response = model.predict(prompt, max_output_tokens=128)

print("Status: OK")
print("Response:", response.text)
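`model.predict` also accepts sampling parameters such as `temperature`, `top_k`, and `top_p`. Once the chatbot grows, it helps to keep the defaults in one place; a small sketch (the `DEFAULTS` dict and `merged_params` helper are this tutorial's own names, not SDK API):

```python
# Illustrative defaults; tune these once you see real responses.
DEFAULTS = {"max_output_tokens": 128, "temperature": 0.2}

def merged_params(overrides=None):
    """Combine per-request overrides with the defaults (overrides win)."""
    return {**DEFAULTS, **(overrides or {})}

print(merged_params({"temperature": 0.8}))
# {'max_output_tokens': 128, 'temperature': 0.8}
```

In main.py the call then becomes `model.predict(prompt, **merged_params())`.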
Run it in PyCharm (the green ▶ button). Expected output:
Status: OK
Response: Vertex AI is Google's unified platform for building, deploying, and managing machine learning models.
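On a fresh project the first calls can hit transient quota or availability errors (HTTP 429/503). A minimal retry-with-backoff sketch; `call_with_retry` and the `flaky` stand-in are illustrative helpers, not part of the Vertex AI SDK (in real code you would catch specific classes from `google.api_core.exceptions` rather than bare `Exception`):

```python
import time

def call_with_retry(fn, attempts=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:  # narrow this to specific API exceptions in real code
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demonstrate with a stand-in that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(call_with_retry(flaky, base_delay=0.01))  # ok
```

In main.py you would wrap the real call, e.g. `call_with_retry(lambda: model.predict(prompt, max_output_tokens=128))`.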