mirror of
https://github.com/jambonz/jambonz-infrastructure.git
synced 2026-01-25 02:08:27 +00:00
add prefix
This commit is contained in:
@@ -1,51 +1,78 @@
 # terraform for a jambonz development / test system

-This terraform configuration generates a jambonz deployment consisting of one SBC server and an autoscale group of feature servers (initially containing a single feature server), and an SNS notification topic to signal lifecycle events when a feature server is terminating due to a scale-in operation. Feel free to edit the autoscale group parameters in the AWS console or cli once it has been created.
+This terraform configuration generates a jambonz deployment suitable for testing or trial deployments.

-The SBC server is assigned an elastic IP.
+The deployment generated consists of:

-### databases
+- a VPC along with public subnets, internet gateway, routing table and the like.
+- 2 EC2 instances: 1 SBC and 1 feature server (the SBC is assigned an elastic IP)
+- an autoscale group containing the feature server
+- an SNS topic that is used to notify autoscale lifecycle events to the feature server
+- an aurora mysql serverless database
+- an elasticache redis server

-It also creates an Elasticache redis instance and an Aurora serverless mysql database, along with the necessary security groups.
+## Running the script

-## Before running the terraform script
+### Prerequisites

-There are several changes you will need to make before running the script.
+Before running the terraform script you will need to have done the following:

-1. This script creates a VPC in the us-west-2 region. You may prefer to run in a different region: if so, edit the variables.tf file accordingly.
+1. Provisioned accounts on AWS and GCP
+1. (AWS) Created a keypair in your target region and downloaded the .pem file to your local machine.
+1. (AWS) Generated an access key id and secret access key
+1. (GCP) Created a project, enabled both text to speech and speech to text APIs, created a service account for the project and downloaded the json key file for the service account.
+1. Copied the GCP json key file into the `credentials/` folder in your local copy of this project.
+1. Reviewed and edited the [variables.tf](./variables.tf) file in your local copy of this project to specify your desired AWS region, availability zones, EC2 instance type, and your AWS keys (alternatively, these can be specified on the command line when running the script).

-2. If you _do_ want to run in a different region, you need to make sure the 3 AMIs that the terraform script deploys are available in your preferred region. That means either you run the packer scripts yourself and create the AMIs, or you contact me and ask me to copy the AMIs into your preferred region. If you create the AMIs yourself you will need to change the "owner" attribute in the ami filter in jambonz.tf to your own aws id.
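As an illustrative sketch only (not the actual contents of jambonz.tf), the "owner" attribute referred to above lives in an `aws_ami` data source of roughly this shape; the data source name, account id, and filter pattern below are all hypothetical:

```hcl
# Hypothetical sketch of an AMI lookup; the real filter in jambonz.tf
# will differ. Replace "owners" with your own AWS account id if you
# built the AMIs yourself with packer.
data "aws_ami" "jambonz_sbc" {
  most_recent = true
  owners      = ["123456789012"] # your AWS account id here

  filter {
    name   = "name"
    values = ["jambonz-sbc-*"] # hypothetical AMI name pattern
  }
}
```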
+### It's go time!

-3. You will need to download a json service key file from google cloud in order to use the speech services. Copy that file into the credentials folder in this project with the name gcp.json before you run terraform, since that is where the terraform script expects to find it and what it expects to be named.
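A minimal shell sketch of that copy step; the downloaded key's filename is hypothetical, and the `touch` line merely stands in for the file you actually downloaded from the GCP console:

```shell
# Stand-in for the key file you downloaded from the GCP console;
# use your real filename instead of creating an empty one.
touch my-gcp-service-account.json

# Put it where the terraform script expects it: credentials/gcp.json
mkdir -p credentials
cp my-gcp-service-account.json credentials/gcp.json
```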
+After [installing terraform](https://learn.hashicorp.com/terraform/getting-started/install.html) install the dependencies on your local machine:

-4. Also create an AWS access key id and secret access key in order to use AWS polly. Either provide these in the variables.tf file or override on the command line.

 In general, feel free to customize the terraform scripts to your needs. They are documented and fairly self-explanatory.

 ## Running the terraform script

 Please review and edit the [variables.tf](./variables.tf) file as appropriate, given the suggestions above.

 Then install the dependencies:
 ```
 terraform init
 ```

-If you've made changes to the script, test it out:
+Testing out the script is a good idea, particularly if you've made changes:
 ```
 terraform plan
 ```

-When you are ready to run it, do terraform apply, optionally passing any command-line arguments that you want to override variables.tf:
+When you are ready to run it, do terraform apply, optionally passing command-line arguments to override anything in variables.tf.

+So, if you edit variables.tf to put in your selections:

 ```
-terraform apply -var='key_name=aws-dhorton-key' \
-  -var 'aws_access_key_id_runtime=KASYJH6IPHQPOLMVLWID' \
-  -var 'aws_secret_access_key_runtime=WkjfaufgzHSHDYKQ+/+1tMPO4/DM9ADWO+asdfasdf'
+terraform apply
 ```

-(Note: those are not valid keys of course, just for explanatory purposes).
+or, if you prefer command-line arguments, you might do:

-If you want to destroy the resources created, then:
-```
-terraform destroy
-```
+```
+terraform apply \
+  -var='region=eu-west-3' \
+  -var='public_subnets={"eu-west-3a": "172.31.32.0/24", "eu-west-3b": "172.31.33.0/24"}' \
+  -var='aws_access_key_id_runtime=XXXX' \
+  -var='aws_secret_access_key_runtime=YYYYY' \
+  -var='ssh_key_path=~/aws/aws-dhorton-paris.pem' \
+  -var='key_name=aws-dhorton-paris'
+```

+Enter 'yes' at the confirmation prompt, and very shortly you will be the proud owner of a brand-new jambonz cluster!
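As an alternative to long `-var` flags, terraform automatically loads a `terraform.tfvars` file from the working directory. A sketch with placeholder values (not real key names or paths):

```shell
# terraform.tfvars is auto-loaded by terraform plan/apply.
# Every value below is a placeholder; substitute your own.
cat > terraform.tfvars <<'EOF'
region       = "eu-west-3"
key_name     = "aws-dhorton-paris"
ssh_key_path = "~/aws/aws-dhorton-paris.pem"
EOF
```

Any `-var` flags still given on the command line take precedence over values in the tfvars file.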
+(Note: to destroy the cluster, simply run `terraform destroy`).

+### Next steps

+After creating the cluster, get the elastic IP of your SBC server and navigate to:

+```
+http://your-elastic-ip:3001
+```
+in your web browser to bring up the provisioning GUI.

+Log in with admin/admin.

+You'll be asked to change the password and then walked through a simple configuration of your account, applications, sip trunks and phone numbers.

+Once done, you are ready to test!
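If you'd rather check reachability from a terminal before opening a browser, the same URL can be probed with curl. The sketch below stands up a throwaway local web server on port 3001 purely so the command has something to hit; in real use, point curl at your SBC's elastic IP instead:

```shell
# Throwaway local stand-in for the portal, for illustration only.
python3 -m http.server 3001 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# The actual check: replace 127.0.0.1 with your-elastic-ip.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:3001)
kill $SERVER_PID
echo "$STATUS"
```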
@@ -1,3 +1,7 @@
+variable "prefix" {
+  description = "name of VPC and other identifiers - lower case letters only"
+  default = "jambonz"
+}
 variable "region" {
   description = "the aws region in which to create the VPC"
   default = "us-west-2"
@@ -21,10 +25,6 @@ variable "ec2_instance_type" {
   description = "the EC2 instance type to use for the jambonz server"
   default = "t2.medium"
 }
-variable "prefix" {
-  description = "name of VPC and other identifiers - lower case letters only"
-  default = "jambonz"
-}
 variable "key_name" {
   description = "name of an aws keypair that you have downloaded and wish to use to access the jambonz instance via ssh"
   default = "your-key-here"
@@ -1,59 +1,78 @@
 # terraform for a jambonz production system

-This terraform configuration generates a jambonz deployment suitable for production. It creates a VPC with two public subnets in two availability zones for redundancy. Each subnet contains:
+This terraform configuration generates a jambonz deployment suitable for testing or trial deployments.

-- an SBC SIP server
-- an SBC media server
+The deployment generated consists of:

-Each of these instances is assigned an elastic IP.
+- a VPC along with public subnets, internet gateway, routing table and the like.
+- 5 EC2 instances: 2 SBC SIP servers, 2 SBC media servers, and 1 feature server (the SBCs are assigned elastic IPs)
+- an autoscale group containing the feature server
+- an SNS topic that is used to notify autoscale lifecycle events to the feature server
+- an aurora mysql serverless database
+- an elasticache redis server

-It also creates an autoscale group of feature servers (initially with a single feature server), and an SNS notification topic to signal lifecycle events when a feature server is terminating due to a scale-in operation. Feel free to edit the autoscale group parameters in the AWS console or cli once it has been created.
+## Running the script

-### databases
+### Prerequisites

-It also creates an Elasticache redis instance and an Aurora serverless mysql database, along with the necessary security groups.
+Before running the terraform script you will need to have done the following:

-## Before running the terraform script
+1. Provisioned accounts on AWS and GCP
+1. (AWS) Created a keypair in your target region and downloaded the .pem file to your local machine.
+1. (AWS) Generated an access key id and secret access key
+1. (GCP) Created a project, enabled both text to speech and speech to text APIs, created a service account for the project and downloaded the json key file for the service account.
+1. Copied the GCP json key file into the `credentials/` folder in your local copy of this project.
+1. Reviewed and edited the [variables.tf](./variables.tf) file in your local copy of this project to specify your desired AWS region, availability zones, EC2 instance type, and your AWS keys (alternatively, these can be specified on the command line when running the script).

-There are several changes you will need to make before running the script.
+### It's go time!

+After [installing terraform](https://learn.hashicorp.com/terraform/getting-started/install.html) install the dependencies on your local machine:

-1. This script creates a VPC in the us-west-2 region. You may prefer to run in a different region: if so, edit the variables.tf file accordingly.

-2. If you _do_ want to run in a different region, you need to make sure the 3 AMIs that the terraform script deploys are available in your preferred region. That means either you run the packer scripts yourself and create the AMIs, or you contact me and ask me to copy the AMIs into your preferred region. If you create the AMIs yourself you will need to change the "owner" attribute in the ami filter in jambonz.tf to your own aws id.

-3. Note that the default instance size in variables.tf is a tiny t2.micro. This is not suitable for production, but is configured that way since you may want to play with this script a bit before deploying "for real" and this will keep your costs down. When you are ready to create your production VPC update variables.tf to choose a production quality instance type (a c5n.xlarge is a good choice).
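When you are ready for production sizing, the override is a one-line change in variables.tf or a tfvars file; a hedged sketch (the value is the README's own suggestion, and `ec2_instance_type` is the variable name the repo's variables.tf defines):

```hcl
# In terraform.tfvars (or as -var on the command line):
ec2_instance_type = "c5n.xlarge" # production-quality; t2.micro is for trials only
```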
-4. You will need to download a json service key file from google cloud in order to use the speech services. Copy that file into the credentials folder in this project with the name gcp.json before you run terraform, since that is where the terraform script expects to find it and what it expects to be named.

-5. Also create an AWS access key id and secret access key that will be deployed on the feature server. This is needed both to enable SNS notification as well as AWS polly for TTS. Either provide these in the variables.tf file or override on the command line.

 In general, feel free to customize the terraform scripts to your needs. They are documented and fairly self-explanatory.

 ## Running the terraform script

 Please review and edit the [variables.tf](./variables.tf) file as appropriate, given the suggestions above.

 Then install the dependencies:
 ```
 terraform init
 ```

-If you've made changes to the script, test it out:
+Testing out the script is a good idea, particularly if you've made changes:
 ```
 terraform plan
 ```

-When you are ready to run it, do terraform apply, optionally passing any command-line arguments that you want to override variables.tf:
+When you are ready to run it, do terraform apply, optionally passing command-line arguments to override anything in variables.tf.

+So, if you edit variables.tf to put in your selections:

 ```
-terraform apply -var='key_name=aws-dhorton-key' \
-  -var 'aws_access_key_id_runtime=KASYJH6IPHQPOLMVLWID' \
-  -var 'aws_secret_access_key_runtime=WkjfaufgzHSHDYKQ+/+1tMPO4/DM9ADWO+asdfasdf'
+terraform apply
 ```

-(Note: those are not valid keys of course, just for explanatory purposes).
+or, if you prefer command-line arguments, you might do:

-If you want to destroy the resources created, then:
-```
-terraform destroy
-```
+```
+terraform apply \
+  -var='region=eu-west-3' \
+  -var='public_subnets={"eu-west-3a": "172.31.32.0/24", "eu-west-3b": "172.31.33.0/24"}' \
+  -var='aws_access_key_id_runtime=XXXX' \
+  -var='aws_secret_access_key_runtime=YYYYY' \
+  -var='ssh_key_path=~/aws/aws-dhorton-paris.pem' \
+  -var='key_name=aws-dhorton-paris'
+```

+Enter 'yes' at the confirmation prompt, and very shortly you will be the proud owner of a brand-new jambonz cluster!

+(Note: to destroy the cluster, simply run `terraform destroy`).

+### Next steps

+After creating the cluster, get the elastic IP of one of your SBC SIP servers (any one, it does not matter which) and navigate to:

+```
+http://your-elastic-ip:3001
+```
+in your web browser to bring up the provisioning GUI.

+Log in with admin/admin.

+You'll be asked to change the password and then walked through a simple configuration of your account, applications, sip trunks and phone numbers.

+Once done, you are ready to test!
@@ -1,12 +1,12 @@
 # create a subnet group for aurora mysql
 resource "aws_db_subnet_group" "jambonz" {
-  name = "jambonz-mysql-subnet"
+  name = "${var.prefix}-mysql-subnet"
   subnet_ids = local.my_subnet_ids
 }

 # create aurora database
 resource "aws_rds_cluster" "jambonz" {
-  cluster_identifier = "aurora-cluster-jambonz"
+  cluster_identifier = "aurora-cluster-${var.prefix}"
   engine = "aurora"
   engine_version = "5.6.10a"
   engine_mode = "serverless"
@@ -36,7 +36,7 @@ resource "aws_elasticache_subnet_group" "jambonz" {
 resource "aws_elasticache_cluster" "jambonz" {
   cluster_id = "jambonz"
   engine = "redis"
-  node_type = "cache.t2.micro"
+  node_type = "cache.t2.medium"
   num_cache_nodes = 1
   subnet_group_name = "jambonz-cache-subnet"
   security_group_ids = [aws_security_group.allow_redis.id]
@@ -1,23 +1,10 @@
 # create an SNS notification topic
 resource "aws_sns_topic" "jambonz-sns-topic" {
-  name = var.sns_topic
+  name = "${var.prefix}-fs-lifecycle-events"
 }

 # create an IAM role that allows publishing to the SNS topic
-#data "aws_iam_policy_document" "jambonz_sns_publish" {
-#  statement {
-#    actions = [
-#      "sns:Publish"
-#    ]
-
-#    resources = [
-#      aws_sns_topic.jambonz-sns-topic.arn
-#    ]
-#  }
-#}

 resource "aws_iam_role" "jambonz_sns_publish" {
-  name = "jambonz_sns_publish"
+  name = "${var.prefix}_sns_publish"

   assume_role_policy = <<EOF
 {
@@ -94,7 +81,7 @@ resource "aws_launch_configuration" "jambonz-feature-server" {

 # create a placement group to spread feature server instances
 resource "aws_placement_group" "jambonz-feature-server" {
-  name = "jambonz-feature-server"
+  name = "${var.prefix}-feature-server"
   strategy = "spread"
 }

@@ -111,7 +98,7 @@ resource "aws_autoscaling_group" "jambonz-feature-server" {

   tag {
     key = "Name"
-    value = "jambonz-feature-server"
+    value = "${var.prefix}-feature-server"
     propagate_at_launch = true
   }

@@ -35,7 +35,7 @@ resource "aws_instance" "jambonz-sbc-sip-server" {
   depends_on = [aws_internet_gateway.jambonz, aws_elasticache_cluster.jambonz, aws_rds_cluster.jambonz]

   tags = {
-    Name = "jambonz-sbc-sip-server"
+    Name = "${var.prefix}-sbc-sip-server"
   }
 }

@@ -65,7 +65,7 @@ resource "aws_instance" "jambonz-sbc-rtp-server" {
   depends_on = [aws_internet_gateway.jambonz]

   tags = {
-    Name = "jambonz-sbc-rtp-server"
+    Name = "${var.prefix}-sbc-rtp-server"
   }
 }

@@ -77,6 +77,7 @@ resource "null_resource" "seed" {
   connection {
     type = "ssh"
     user = "admin"
+    private_key = file("${var.ssh_key_path}")
     host = element(aws_eip.jambonz-sbc-sip.*.public_ip, 0)
   }
@@ -1,3 +1,7 @@
+variable "prefix" {
+  description = "name of VPC and other identifiers - lower case letters only"
+  default = "jambonz"
+}
 variable "region" {
   description = "the aws region in which to create the VPC"
   default = "us-west-2"
@@ -22,13 +26,17 @@ variable "jambonz_sbc_rtp_private_ips" {
   default = ["172.31.32.20", "172.31.33.20"]
 }
 variable "ec2_instance_type" {
-  description = "the EC2 instance type to use for the jambonz server"
-  default = "t2.micro"
+  description = "the EC2 instance type to use for the jambonz servers"
+  default = "c5n.large"
 }
 variable "key_name" {
   description = "name of an aws keypair that you have downloaded and wish to use to access the jambonz instance via ssh"
   default = "your-key-here"
 }
 variable "ssh_key_path" {
   description = "path to your aws keypair on your local machine"
   default = "/path/to/key.pem"
 }
 variable "aws_access_key_id_runtime" {
   description = "AWS access key jambonz will use to access AWS Polly TTS"
   default = "your-aws-access-key-id"