more copy changes (#15)
@@ -93,13 +93,3 @@ navi:
-
path: nodejs-sdk
title: Introduction to the Node.js SDK
-
path: open-source
title: Open source
pages:
-
path: aws
title: Deploying on AWS
-
path: self-host
title: Deploying on bare metal or other hosting providers

@@ -23,5 +23,8 @@ navi:
title: How to install
pages:
-
path: overview
title: Overview
path: aws
title: Installing on AWS
-
path: self-hosted
title: Installing elsewhere

@@ -1,4 +1,4 @@
# Building a self-hosted solution (not on AWS)
# Building a self-hosted solution

If you are using your own hardware, or a hosting provider other than AWS, there is a little more elbow grease required. Follow the instructions below to create a jambonz deployment consisting of one SBC and one Feature Server. You will also be provisioning a mysql server and a redis server.

@@ -8,6 +8,6 @@ To that end, this year we are sponsoring [TADHack Global 2021](https://tadhack.c

We have also sponsored other conferences in the past, including the wonderful [CommCon](https://2019.commcon.xyz/) conference in the UK (run by the awesome Dan Jenkins of [Nimble Ape](https://nimblea.pe/)), and have attended and spoken at other open-source conferences, including [SimCon](https://blog.simwood.com/2020/06/simcon4-and-something-new/).

We provided a small amount of funding this year to support the openSIPS security audit via their [gofundme](https://www.gofundme.com/f/opensips-security-audit-penetration-tests) initiative.
We provided a small amount of funding this year to help support the [openSIPS](https://www.opensips.org/) security audit via their [gofundme](https://www.gofundme.com/f/opensips-security-audit-penetration-tests) initiative.

And, finally, last but not least, we tried to purchase enough yummy baked goods last year to help save the awesome [Bearkery Bakery](https://bearbakeshop.com/), which was run by one of the best people in the RTC community, [Fred Posner](https://qxork.com/). Ultimately, we were defeated by the pandemic, but we don't regret a moment of the effort (gurgle, burp).

@@ -11,7 +11,7 @@ We release both jambonz and drachtio under the <a href="https://github.com/jambo
<div id="why-mit"></div>

#### Why we chose the MIT License
A few words might be in order on why we chose the MIT License, because we notice that quite often purveyors of "free as in beer" FOSS seem to be pictured by their user base as an austere sect of itinerant, karma-seeking monks who have taken a vow of poverty and wander the open source-scape doing saintly good deeds here and there - like providing free support, or adding any old feature that anyone thinks up, at any time, at no charge.
A few words might be in order on why we chose the MIT License, because we notice that sometimes consumers of "free as in beer" FOSS seem to picture the software providers as some sort of austere sect of itinerant, karma-seeking monks who have taken a vow of poverty and wander the open source landscape doing saintly good deeds here and there - like providing free support, or adding any old feature that anyone thinks up, at any time, at no charge.

And sometimes, the fact that software is provided at no cost seems to result in a sense on the part of the consumer (not all mind you, but some) that it must accordingly have little or no value.

@@ -9,12 +9,12 @@ Building an application like jambonz requires curating a selection of the best o
| <a href="https://drachtio.org" target="_blank">drachtio</a> | application logic and call control | <a href="https://github.com/drachtio/drachtio-srf/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://github.com/sipwise/rtpengine" target="_blank">rtpengine</a> | media proxy and transcoding | <a href="https://www.gnu.org/licenses/quick-guide-gplv3.html" target="_blank">GPL v3.0</a> |
| <a href="https://github.com/signalwire/freeswitch" target="_blank">freeswitch</a> | media server | <a href="https://github.com/signalwire/freeswitch/blob/master/LICENSE" target="_blank">MPL v1.1</a> |
| <a href ="https://github.com/drachtio/drachtio-freeswitch-modules" target="_blank">freeswitch plugins</a> | audio integrations w/ google, AWS, others| <a href="https://github.com/drachtio/drachtio-freeswitch-modules/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://www.apiban.org/" target="_blank">apiban</a> | SBC protection from bad actors |<a href="https://www.gnu.org/licenses/old-licenses/gpl-2.0.html" target="_blank">GPL v2.0</a> |
| <a href ="https://github.com/drachtio/drachtio-freeswitch-modules" target="_blank">freeswitch plugins</a> | audio integrations (google, AWS, others)| <a href="https://github.com/drachtio/drachtio-freeswitch-modules/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://www.apiban.org/" target="_blank">apiban</a> | protection from bad SIP traffic |<a href="https://www.gnu.org/licenses/old-licenses/gpl-2.0.html" target="_blank">GPL v2.0</a> |
| <a href="https://expressjs.com/" target="_blank">express</a> | HTTP middleware and web framework |<a href="https://github.com/expressjs/expressjs.com/blob/gh-pages/LICENSE.md" target="_blank">Creative Commons v3.0</a> |
| <a href="https://nodejs.org/" target="_blank">Node.js</a> | Javascript runtime |<a href="https://github.com/nodejs/node/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://libwebsockets.org/" target="_blank">libwebsockets</a> | websockets library |<a href="https://github.com/warmcat/libwebsockets/blob/main/LICENSE" target="_blank">MIT</a> |
| <a href="https://www.mysql.com/" target="_blank">mysql</a> | susbcriber database |<a href=" http://oss.oracle.com/licenses/universal-foss-exception" target="_blank">GPL v2.0 with FOSS Exception</a> |
| <a href="https://www.mysql.com/" target="_blank">mysql</a> | provisioning database |<a href=" http://oss.oracle.com/licenses/universal-foss-exception" target="_blank">GPL v2.0 with FOSS Exception</a> |
| <a href="https://github.com/influxdata/telegraf" target="_blank">Telegraf</a> | metrics agent | <a href="https://github.com/influxdata/telegraf/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://github.com/influxdata/influxdb" target="_blank">Influxdb</a> | time series database | <a href="https://github.com/influxdata/influxdb/blob/master/LICENSE" target="_blank">MIT</a> |
| <a href="https://redis.io/" target="_blank">Redis</a> | key-value store | <a href="https://redis.io/topics/license" target="_blank">3-clause BSD</a> |

@@ -27,5 +27,5 @@ Building an application like jambonz requires curating a selection of the best o
<ol class="mxs">
<li>When using Homer (or Grafana) with the AGPL v3 license, any changes that you make to the jambonz source code <strong>are not</strong> considered a "covered work" by that license, as the two programs are not linked.<br/>TLDR: any changes you make to jambonz source code remain under the more permissive MIT license</li>
<li><a href="https://qxip.net/" target="_blank">QXIP</a>, the creator of Homer and the <a href="https://github.com/sipcapture/HEP" target="_blank">HEP protocol</a>, also offer a non-GPL option (<a href="https://hepic.tel" target="_blank">HEPIC</a>) that is specifically designed for the needs of large-scale telcos and Communications Service Providers. We highly recommend it to those who need a carrier-class monitoring and SIP capture solution.</li>
<li>If, after reading the above, you (or the company you work for) are still scared off by the AGPL v3 license, and are not interested in <a href="https://hepic.tel" target="_blank">HEPIC</a> (have you checked it out?), then know that neither Homer nor Grafana are required components of jambonz: simply don't install them, or remove if already installed.</li>
<li>If, after reading the above, you (or the company you work for) are still scared off by the AGPL v3 license, and are not interested in <a href="https://hepic.tel" target="_blank">HEPIC</a> (have you checked it out?), then know that neither Homer nor Grafana are required components of jambonz - simply don't install them, or remove if already installed.</li>
</ol>

@@ -11,7 +11,7 @@ Here are some easy (i.e. no cost) ways you can contribute:
- Have coding skills (Node.js, React, Voip/SIP, or C++)? We'd love your help as a code contributor.
- Evangelize. Talk the project up to your friends and business contacts. Allow us to use your success stories in our marketing materials.

All of these can be really helpful in moving a project forward.
All of these things can be really helpful in moving a project forward.

There are also several easy ways you can provide direct financial support to the project:

@@ -24,4 +24,6 @@ Finally, we encourage you to think about supporting open-source RTC projects mor

In fact, we don't think you should think of supporting open-source in a purely symmetrical fashion - e.g., "I use project A, so I should figure out how to send money to project A".

We'd encourage you to think in a more asymmetric fashion. Supporting project A is great, but if you can't manage that, perhaps you can send some of your staff to an open-source conference this year, or maybe even provide sponsorship for one. Or maybe you can allow project A to use your experience publicly as a success story on their web site. The point is, there are many ways to support the overall ecosystem that you are now part of. All you need to do is care.
We'd encourage you to think asymmetrically. Supporting project A is great, but if you can't manage that, perhaps you can send some of your staff to an open-source conference this year, or maybe even provide sponsorship for one. Or maybe you can allow project A to use your experience publicly as a success story on their web site. The point is, there are many ways to support the overall ecosystem that you are now a part of.

All **you** need to do, is care.

@@ -8,9 +8,9 @@ We believe that the use and delivery of open source in RTC should be treated as

The health of the RTC open source ecosystem, like any ecosystem, is inherently fragile and should not be taken for granted.

To that end, we believe that all companies in the RTC space that are engaged with open source -- both consumers and makers -- should provide an annual report summarizing their commitment to the open source ecosystem.
To that end, we believe that all companies in the RTC space that are engaged with open source -- both consumers and makers -- should provide a report summarizing their commitment to the open source ecosystem, on at least an annual basis.

As a point of comparison, many companies feel that they have a social mission responsibility and will report on the actions they've taken to fulfill that responsibility apart and aside from their financial results. Our shared responsibility to the stewardship of the open source RTC ecosystem should be treated in the same manner.
As a point of comparison, many companies feel that they have a social mission responsibility, and will report on the actions they've taken to fulfill that responsibility apart and aside from their financial results. Our shared responsibility to the stewardship of the open source RTC ecosystem should be treated in the same respectful manner.

On these pages you will find our own statement of commitment as well as instructions on how to install and use the open source.

57  markdown/open-source/install/aws.md  Normal file
@@ -0,0 +1,57 @@
# Installing on AWS

The quickest way for you to try out jambonz is to create an account on [jambonz.us](https://jambonz.us/register). This gets you up and running with a few clicks of the mouse, and all of your applications can later be re-pointed to a self-hosted system that you build up.

When you are ready to build your own system, AWS is the recommended hosting provider for jambonz at the present time, because a lot of work has been done to integrate to AWS autoscaling groups and other resources that make deployment and management of a jambonz cluster easy.

> We intend to add similar scaling support for the other leading hosting providers in the near future. If you want to run on a different public cloud and are willing to sponsor the work to make it happen, please contact us.

There are two supported methods for deploying a jambonz system in your AWS account:

##### AWS Marketplace

You can deploy a single-server jambonz "mini" system on AWS in a snap by [clicking here](https://aws.amazon.com/marketplace/pp/prodview-7lmody7uv2sye). This AMI is available in all AWS regions and is a great way to quickly stand up a low-cost jambonz system for testing or development purposes.

> Coming soon on AWS Marketplace: we will be offering additional jambonz subscriptions that are suited for a wider variety of deployments - alongside the "mini" you will be able to choose from a small, medium, or large deployment (just like buying an ice cream cone!).

A few notes when spinning up the AMI for the first time:

<ul>
<li>After the AMI is running for the first time, wait a minute or two before trying to access the portal. There are some userdata scripts that need to finish running to configure the webapp for use. If you attempt to log in before it is complete, you will get a 502 Bad Gateway response. If this happens, just wait a minute or two and try again.</li>
<li>The output variables on the AWS console after the AMI has been deployed will give you the URL of the portal and the username and password to use. The username will be 'admin' and the initial password will be the AWS host-id of the EC2 instance. You will be forced to set a new password when you log in for the first time.</li>
</ul>

##### Terraform and packer scripts

A second option is to use our packer and terraform scripts to deploy a jambonz system on AWS. This is a bit more work, because you need to build your own AMIs. You will use packer to build two AMIs (an SBC/web server and a Feature Server), and then you will use terraform to deploy a jambonz system with those AMIs.

Here is what you will need to do:

###### Build AMIs

Check out the [jambonz-infrastructure](https://github.com/jambonz/jambonz-infrastructure) repo to your local machine. Make sure you have installed the AWS CLI as well as [packer](https://www.packer.io/) and [terraform](https://www.terraform.io/).
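
For example, a minimal sketch of checking out the repo and confirming the build tools are available (assumes the AWS CLI is already configured with credentials for your account):

```bash
# clone the infrastructure repo and verify the tooling
git clone https://github.com/jambonz/jambonz-infrastructure.git
cd jambonz-infrastructure

aws --version        # AWS CLI
packer version       # used to build the AMIs
terraform version    # used to deploy them
```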

<ul>
<li>change into the ./packer/jambonz-sbc-sip-rtp directory of the repo. Edit the <a href="https://github.com/jambonz/jambonz-infrastructure/blob/0692528616a7ddf3b4b113cc0f1362f4e47fcc36/packer/jambonz-sbc-sip-rtp/template.json#L3">region variable</a> in the template.json file to indicate the region where you want to build the AMIs, if different than us-east-1.</li>
<li>in a terminal window in the `./packer/jambonz-sbc-sip-rtp` directory, run the command <br/><code>packer build -color=false template.json</code><br/>This will create a new AMI for the SBC function under your account in the specified region. Make note of the AMI id.</li>
<li>change into the `./packer/jambonz-feature-server` directory of the repo and repeat the steps above to build a second AMI for the feature server element. Make note of this AMI id as well.</li>
</ul>

###### Modify terraform script and deploy

- change into the ./terraform/jambonz-devtest directory
- edit [variables.tf](https://github.com/jambonz/jambonz-infrastructure/blob/master/terraform/jambonz-devtest/variables.tf). You are going to need to change the following variables:
  - "ami_owner_account" should be set to your AWS account id;
  - "region" should be set to the region you want to deploy in -- the same region the AMIs are in; "public_subnets" will need to be modified as well to have the name of the subnets in your desired region.
  - change "ec2_instance_type_sbc" and "ec2_instance_type_fs" to the AMI ids that you built in the previous step, and
  - change "key_name" and "ssh_key_path" to your ssh key-pair name and path.

At that point you can run the terraform script:

```bash
terraform init
terraform apply
```
This will build a VPC with associated subnets and internet gateway, security groups, etc., and will create one EC2 instance that is the SBC and web server, and a second EC2 instance that is the feature server. The feature server will be in an autoscale group. Also, an aurora serverless mysql database and a redis elasticache service will be created.
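
Once `terraform apply` completes, you can list the generated outputs, which is a convenient way to find values such as the public IP of the SBC (the exact output names depend on the scripts):

```bash
terraform output
```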

After deployment, you can log into the portal at http://<sbc-public-ip>:80 as admin/admin. You will be forced to change the password after you first log in.

@@ -1 +0,0 @@
# How to install from open source

579  markdown/open-source/install/self-hosted.md  Normal file
@@ -0,0 +1,579 @@
# Building a self-hosted solution (not on AWS)

If you are using your own hardware, or a hosting provider other than AWS, there is a little more elbow grease required. Follow the instructions below to create a jambonz deployment consisting of one SBC and one Feature Server. You will also be provisioning a mysql server and a redis server.

### A. Provision servers
You'll need two servers -- one will be the public-facing SBC, while the other will be the feature server. The SBC must have a public address; the Feature Server does not necessarily need a public address, but of course will need connectivity to the SBC, the mysql database, the redis server, and outbound connectivity to the internet in order to complete the install.

If desired, you can install mysql and redis on the SBC server, but as long as they are reachable from both the SBC and the Feature Server you'll be fine. We will be using ansible to build up the servers, which means from your laptop you need ssh connectivity to both the SBC and the Feature Server.

The base software distribution for both the SBC and the Feature Server should be Debian 9. A vanilla install that includes sudo and python is all that is needed (python is used by [ansible](https://www.ansible.com/), which we will be using to build up the servers in the next step).
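
A quick way to confirm each server meets these requirements is to check from a shell on each host (a sketch; adjust to your environment):

```bash
cat /etc/debian_version    # should report a 9.x release
which sudo python          # both should resolve to a path
```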

### B. Use ansible to install base software
If you don't have ansible installed on your laptop, install it now [following these instructions](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html).

Check out the following github repos to your laptop:

[ansible-role-drachtio](https://github.com/davehorton/ansible-role-drachtio)

[ansible-role-fsmrf](https://github.com/davehorton/ansible-role-fsmrf)

[ansible-role-nodejs](https://github.com/davehorton/ansible-role-nodejs)

[ansible-role-rtpengine](https://github.com/davehorton/ansible-role-rtpengine)

For the SBC, create an ansible playbook that looks like this, and run it:
```yaml
---
- hosts: all
  become: yes
  vars:
    drachtioBranch: develop
    rtp_engine_version: mr8.5
  vars_prompt:
    - name: "cloud_provider"
      prompt: "Cloud provider: aws, gcp, azure, digital_ocean"
      default: none
      private: no
  roles:
    - ansible-role-drachtio
    - ansible-role-nodejs
    - ansible-role-rtpengine
```

and for the Feature Server, create an ansible playbook that looks like this, and run it:
```yaml
---
- hosts: all
  become: yes
  vars:
    drachtioBranch: develop
    build_with_grpc: true
  vars_prompt:
    - name: "cloud_provider"
      prompt: "Cloud provider: aws, gcp, azure, digital_ocean"
      default: none
      private: no
  roles:
    - ansible-role-drachtio
    - ansible-role-nodejs
    - ansible-role-fsmrf
```
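
The playbooks target `hosts: all`, so you supply the target machine as an inventory when you run them. A minimal sketch of running each playbook from your laptop (the playbook filenames and host placeholders are illustrative; assumes ssh access as a sudo-capable user and that the roles above are on ansible's roles path):

```bash
# ad-hoc inventory: a single host followed by a comma
ansible-playbook -i '<sbc-ip>,' -u admin -K sbc.yml
ansible-playbook -i '<feature-server-ip>,' -u admin -K feature-server.yml
```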

### C. Create mysql database
You need to install a mysql database server. Example instructions for installing mysql are provided [here](https://dev.mysql.com/downloads/).

Once the mysql server is installed, create a new database named 'jambones' with an associated username 'admin' and a password of your choice. For the remainder of these instructions, we'll assume a password of 'JambonzR0ck$' was assigned, but you may create a password of your own choosing.

Once the database and user have been created, create [this schema](https://github.com/jambonz/jambonz-api-server/blob/master/db/jambones-sql.sql).

Once the database schema has been created, run [this database script](https://github.com/jambonz/jambonz-api-server/blob/master/db/create-admin-token.sql) as well as [this database script](https://github.com/jambonz/jambonz-api-server/blob/master/db/create-default-account.sql) to seed the database with initial data.
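
For example, a sketch of doing all of the above from a shell that can reach the mysql server (substitute your own host and password; the schema and seed scripts come from the jambonz-api-server repo linked above):

```bash
# create the database and the admin user
# (run this on the db host itself if remote root access is not enabled)
mysql -h <your-mysql-host> -u root -p -e "CREATE DATABASE jambones; \
  CREATE USER 'admin'@'%' IDENTIFIED BY 'JambonzR0ck\$'; \
  GRANT ALL PRIVILEGES ON jambones.* TO 'admin'@'%'; FLUSH PRIVILEGES;"

# load the schema and the two seed scripts
git clone https://github.com/jambonz/jambonz-api-server.git
mysql -h <your-mysql-host> -u admin -p jambones < jambonz-api-server/db/jambones-sql.sql
mysql -h <your-mysql-host> -u admin -p jambones < jambonz-api-server/db/create-admin-token.sql
mysql -h <your-mysql-host> -u admin -p jambones < jambonz-api-server/db/create-default-account.sql
```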

### D. Create redis server
Install redis somewhere in your network by following [these instructions](https://redis.io/topics/quickstart) and save the redis hostname that you will use to connect to it.
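
Once redis is up, a quick sanity check from either server (assuming redis-cli is installed there) is:

```bash
redis-cli -h <your-redis-host> -p 6379 ping    # should print PONG
```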

### E. Configure SBC
Your SBC should have both a public IP and a private IP. The public IP needs to be reachable from the internet, while the private IP should be on the internal subnet, and thus reachable by the Feature Server.

> In the examples below, we assume that the public IP is 190.144.12.220 and the private IP is 192.168.3.11. Your IPs will be different of course, so substitute the correct IPs in the changes below.

#### drachtio configuration

In `/etc/systemd/system/drachtio.service` change this line:

```bash
ExecStart=/usr/local/bin/drachtio --daemon
```
to this:

```bash
ExecStart=/usr/local/bin/drachtio --daemon \
  --contact sip:192.168.3.11;transport=udp --external-ip 190.144.12.220 \
  --contact sip:192.168.3.11;transport=tcp \
  --address 0.0.0.0 --port 9022
```
**or**, if you plan on enabling Microsoft Teams routing, to this:
```bash
ExecStart=/usr/local/bin/drachtio --daemon \
  --contact sip:192.168.3.11;transport=udp --external-ip 190.144.12.220 \
  --contact sips:192.168.3.11:5061;transport=tls --external-ip 190.144.12.220 \
  --contact sip:192.168.3.11;transport=tcp \
  --address 0.0.0.0 --port 9022
```
Then, edit `/etc/drachtio/conf.xml` to uncomment the request-handler xml tag and edit it to look like this:
```xml
<request-handlers>
  <request-handler sip-method="INVITE">http://127.0.0.1:4000</request-handler>
</request-handlers>
```

Then, reload and restart the drachtio server:
```bash
systemctl daemon-reload
systemctl restart drachtio
```
After doing that, run `systemctl status drachtio` and check `/var/log/drachtio/drachtio.log` to verify that the drachtio server started properly and is listening on the specified IPs and ports.

#### rtpengine configuration

In `/etc/systemd/system/rtpengine.service` change this line:

```bash
ExecStart=/usr/local/bin/rtpengine --interface 192.168.3.11!192.168.3.11 \
```
to this:
```bash
ExecStart=/usr/local/bin/rtpengine \
  --interface private/192.168.3.11 \
  --interface public/192.168.3.11!190.144.12.220 \
```
Then, reload and restart rtpengine:
```bash
systemctl daemon-reload
systemctl restart rtpengine
```
After doing that, run `systemctl status rtpengine` to verify that rtpengine is running with the defined interfaces.
> Note: rtpengine logs to `/var/log/daemon.log`.

#### Install drachtio apps

Choose a user to install the drachtio applications under -- the instructions below assume the `admin` user; if you use a different user then edit the instructions accordingly (note: the user must have sudo privileges).

Execute the following commands from the home directory of the install user:

```bash
mkdir apps && cd $_
git clone https://github.com/jambonz/sbc-outbound.git
git clone https://github.com/jambonz/sbc-inbound.git
git clone https://github.com/jambonz/sbc-registrar.git
git clone https://github.com/jambonz/sbc-call-router.git
git clone https://github.com/jambonz/jambonz-api-server.git
git clone https://github.com/jambonz/jambonz-webapp.git
```

Next, edit this file: `~/apps/jambonz-webapp/.env`. Change this:
```bash
REACT_APP_API_BASE_URL=http://[ip]:[port]/v1
```
to this:
```bash
REACT_APP_API_BASE_URL=http://190.144.12.220:3000/v1
```
> Note: again, substitute the public IP of your own SBC in the above.

Next, from the `~/apps/` folder execute the following:
```bash
cd sbc-inbound && sudo npm install --unsafe-perm
cd ../sbc-outbound && sudo npm install --unsafe-perm
cd ../sbc-registrar && sudo npm install --unsafe-perm
cd ../sbc-call-router && sudo npm install --unsafe-perm
cd ../jambonz-api-server && sudo npm install --unsafe-perm
cd ../jambonz-webapp && sudo npm install --unsafe-perm && npm run build

sudo -u admin bash -c "pm2 install pm2-logrotate"
sudo -u admin bash -c "pm2 set pm2-logrotate:max_size 1G"
sudo -u admin bash -c "pm2 set pm2-logrotate:retain 5"
sudo -u admin bash -c "pm2 set pm2-logrotate:compress true"

sudo chown -R admin:admin /home/admin/apps
```

Next, copy this file below into `~/apps/ecosystem.config.js`.

**Note:** Make sure to edit the file to have the correct connectivity information for your mysql and redis servers, and also if you have installed under a user other than 'admin' make sure to update the file paths accordingly (e.g. in the properties below such as 'cwd', 'out_file' etc).

```js
module.exports = {
  apps: [{
    name: 'jambonz-api-server',
    cwd: '/home/admin/apps/jambonz-api-server',
    script: 'app.js',
    out_file: '/home/admin/.pm2/logs/jambonz-api-server.log',
    err_file: '/home/admin/.pm2/logs/jambonz-api-server.log',
    combine_logs: true,
    instance_var: 'INSTANCE_ID',
    exec_mode: 'fork',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      JAMBONES_MYSQL_HOST: '<your-mysql-host>',
      JAMBONES_MYSQL_USER: 'admin',
      JAMBONES_MYSQL_PASSWORD: 'JambonzR0ck$',
      JAMBONES_MYSQL_DATABASE: 'jambones',
      JAMBONES_MYSQL_CONNECTION_LIMIT: 10,
      JAMBONES_REDIS_HOST: '<your-redis-host>',
      JAMBONES_REDIS_PORT: 6379,
      JAMBONES_LOGLEVEL: 'info',
      JAMBONE_API_VERSION: 'v1',
      JAMBONES_CLUSTER_ID: 'jb',
      HTTP_PORT: 3000
    },
  },
  {
    name: 'sbc-call-router',
    cwd: '/home/admin/apps/sbc-call-router',
    script: 'app.js',
    instance_var: 'INSTANCE_ID',
    out_file: '/home/admin/.pm2/logs/jambonz-sbc-call-router.log',
    err_file: '/home/admin/.pm2/logs/jambonz-sbc-call-router.log',
    exec_mode: 'fork',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      HTTP_PORT: 4000,
      JAMBONES_INBOUND_ROUTE: '127.0.0.1:4002',
      JAMBONES_OUTBOUND_ROUTE: '127.0.0.1:4003',
      JAMBONZ_TAGGED_INBOUND: 1,
      JAMBONES_NETWORK_CIDR: '192.168.0.0/16'
    }
  },
  {
    name: 'sbc-registrar',
    cwd: '/home/admin/apps/sbc-registrar',
    script: 'app.js',
    instance_var: 'INSTANCE_ID',
    out_file: '/home/admin/.pm2/logs/jambonz-sbc-registrar.log',
    err_file: '/home/admin/.pm2/logs/jambonz-sbc-registrar.log',
    exec_mode: 'fork',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      JAMBONES_LOGLEVEL: 'info',
      DRACHTIO_HOST: '127.0.0.1',
      DRACHTIO_PORT: 9022,
      DRACHTIO_SECRET: 'cymru',
      JAMBONES_MYSQL_HOST: '<your-mysql-host>',
      JAMBONES_MYSQL_USER: 'admin',
      JAMBONES_MYSQL_PASSWORD: 'JambonzR0ck$',
      JAMBONES_MYSQL_DATABASE: 'jambones',
      JAMBONES_MYSQL_CONNECTION_LIMIT: 10,
      JAMBONES_REDIS_HOST: '<your-redis-host>',
      JAMBONES_REDIS_PORT: 6379,
    }
  },
  {
    name: 'sbc-outbound',
    cwd: '/home/admin/apps/sbc-outbound',
    script: 'app.js',
    instance_var: 'INSTANCE_ID',
    out_file: '/home/admin/.pm2/logs/jambonz-sbc-outbound.log',
    err_file: '/home/admin/.pm2/logs/jambonz-sbc-outbound.log',
    exec_mode: 'fork',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      JAMBONES_LOGLEVEL: 'info',
      DRACHTIO_HOST: '127.0.0.1',
      DRACHTIO_PORT: 9022,
      DRACHTIO_SECRET: 'cymru',
      JAMBONES_RTPENGINES: '127.0.0.1:22222',
      JAMBONES_MYSQL_HOST: '<your-mysql-host>',
      JAMBONES_MYSQL_USER: 'admin',
      JAMBONES_MYSQL_PASSWORD: 'JambonzR0ck$',
      JAMBONES_MYSQL_DATABASE: 'jambones',
      JAMBONES_MYSQL_CONNECTION_LIMIT: 10,
      JAMBONES_REDIS_HOST: '<your-redis-host>',
      JAMBONES_REDIS_PORT: 6379
    }
  },
  {
    name: 'sbc-inbound',
    cwd: '/home/admin/apps/sbc-inbound',
    script: 'app.js',
    instance_var: 'INSTANCE_ID',
    out_file: '/home/admin/.pm2/logs/jambonz-sbc-inbound.log',
    err_file: '/home/admin/.pm2/logs/jambonz-sbc-inbound.log',
    exec_mode: 'fork',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      JAMBONES_LOGLEVEL: 'info',
      DRACHTIO_HOST: '127.0.0.1',
      DRACHTIO_PORT: 9022,
      DRACHTIO_SECRET: 'cymru',
      JAMBONES_RTPENGINES: '127.0.0.1:22222',
      JAMBONES_MYSQL_HOST: '<your-mysql-host>',
      JAMBONES_MYSQL_USER: 'admin',
      JAMBONES_MYSQL_PASSWORD: 'JambonzR0ck$',
      JAMBONES_MYSQL_DATABASE: 'jambones',
      JAMBONES_MYSQL_CONNECTION_LIMIT: 10,
      JAMBONES_REDIS_HOST: '<your-redis-host>',
      JAMBONES_REDIS_PORT: 6379,
      JAMBONES_CLUSTER_ID: 'jb'
    }
  },
  {
    name: 'jambonz-webapp',
    script: 'npm',
    cwd: '/home/admin/apps/jambonz-webapp',
    args: 'run serve'
  }
  ]
};
```

Open the following ports on the server:

**SBC traffic allowed in**

| ports | transport | description |
| ------------- |-------------| -- |
| 3000 |tcp| REST API|
| 3001 |tcp| provisioning GUI|
| 5060 |udp| sip over udp|
| 5060 |tcp| sip over tcp|
| 5061 |tcp| sip over tls|
| 4433 |tcp| sip over wss|
| 40000-60000| udp| rtp |
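
How you open these ports depends on your hosting provider and firewall. As one sketch, using ufw on the SBC itself (adjust if you manage iptables or provider security groups directly):

```bash
sudo ufw allow 22/tcp             # keep ssh access open
sudo ufw allow 3000/tcp           # REST API
sudo ufw allow 3001/tcp           # provisioning GUI
sudo ufw allow 5060/udp           # sip over udp
sudo ufw allow 5060/tcp           # sip over tcp
sudo ufw allow 5061/tcp           # sip over tls
sudo ufw allow 4433/tcp           # sip over wss
sudo ufw allow 40000:60000/udp    # rtp
sudo ufw enable
```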

Next, ssh into the server and run the following command:

```bash
JAMBONES_MYSQL_HOST=<your-mysql-host> \
JAMBONES_MYSQL_USER=admin \
JAMBONES_MYSQL_PASSWORD=JambonzR0ck$ \
JAMBONES_MYSQL_DATABASE=jambones \
/home/admin/apps/jambonz-api-server/db/reset_admin_password.js
```
This is a security measure to randomize some of the initial seed data in the mysql database.

Next, start the applications and configure them to restart on boot:

```bash
sudo -u admin bash -c "pm2 start /home/admin/apps/ecosystem.config.js"
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u admin --hp /home/admin
sudo -u admin bash -c "pm2 save"
sudo systemctl enable pm2-admin.service
```

Check to be sure they are running:

```bash
pm2 list
```

You should see output similar to this:
```bash
admin@ip-172-31-32-10:~$ pm2 list
┌─────┬───────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼───────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 7 │ jambonz-api-server │ default │ 1.1.7 │ fork │ 4494 │ 4s │ 0 │ online │ 30.4% │ 104.7mb │ admin │ disabled │
│ 12 │ jambonz-webapp │ default │ N/A │ fork │ 4540 │ 4s │ 0 │ online │ 7.9% │ 49.9mb │ admin │ disabled │
│ 8 │ sbc-call-router │ default │ 0.0.1 │ fork │ 4500 │ 4s │ 0 │ online │ 3.7% │ 43.8mb │ admin │ disabled │
│ 11 │ sbc-inbound │ default │ 0.3.5 │ fork │ 4538 │ 4s │ 0 │ online │ 24.1% │ 100.3mb │ admin │ disabled │
│ 10 │ sbc-outbound │ default │ 0.4.2 │ fork │ 4515 │ 4s │ 0 │ online │ 13.9% │ 83.3mb │ admin │ disabled │
│ 9 │ sbc-registrar │ default │ 0.1.7 │ fork │ 4512 │ 4s │ 0 │ online │ 13.6% │ 83.0mb │ admin │ disabled │
└─────┴───────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 28461 │ online │ 1 │ 0.3% │ 80.7mb │ admin │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
```

Finally, in your browser, navigate to `http://<sbc-public-ip>:3001`.

You should get a login page to the SBC. Log in with admin/admin. You will be asked to change the password and then be guided through an initial 3-step setup process to configure your account, application, and SIP trunking provider.

### F. Configure Feature Server

Open the following ports on the server:

**Feature server traffic allowed in**

> Note: all of the ports below need to be open for traffic sent from a source IP that is within the local network. Traffic from the internet to these ports can be blocked.

| ports | transport | description |
| ------------- |-------------| -- |
| 3000 | tcp | REST API |
| 5060 |udp| sip |
| 5060 |tcp| sip |
| 5080 |udp| freeswitch sip |
| 5080 |tcp| freeswitch sip |
| 25000 - 40000 |udp| rtp |
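
As one sketch, using ufw on the feature server and restricting these ports to the internal network per the note above (the CIDR shown is from the examples in this guide; use your own):

```bash
sudo ufw allow 22/tcp    # keep ssh access open
sudo ufw allow from 192.168.0.0/16 to any port 3000 proto tcp
sudo ufw allow from 192.168.0.0/16 to any port 5060 proto udp
sudo ufw allow from 192.168.0.0/16 to any port 5060 proto tcp
sudo ufw allow from 192.168.0.0/16 to any port 5080 proto udp
sudo ufw allow from 192.168.0.0/16 to any port 5080 proto tcp
sudo ufw allow from 192.168.0.0/16 to any port 25000:40000 proto udp
sudo ufw enable
```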

In the file `/usr/local/freeswitch/conf/autoload_configs/switch.conf.xml` set the rtp port range to be 25000 through 39000 by editing the 'rtp-start-port' and 'rtp-end-port' as follows:
```xml
<!-- RTP port range -->
<param name="rtp-start-port" value="25000"/>
<param name="rtp-end-port" value="39000"/>
```

In the file `/usr/local/freeswitch/conf/autoload_configs/event_socket.conf.xml` replace the contents with:
```xml
<configuration name="event_socket.conf" description="Socket Client">
  <settings>
    <param name="nat-map" value="false"/>
    <param name="listen-ip" value="0.0.0.0"/>
    <param name="listen-port" value="8021"/>
    <param name="password" value="JambonzR0ck$"/>
    <param name="apply-inbound-acl" value="socket_acl"/>
  </settings>
</configuration>
```
> Note: Feel free to choose a different password if you like.

In the file `/etc/systemd/system/freeswitch.service` make sure the following Environment variables are set:
```bash
[Service]
; service
Type=forking
PIDFile=/usr/local/freeswitch/run/freeswitch.pid
EnvironmentFile=-/etc/default/freeswitch
Environment="MOD_AUDIO_FORK_SUBPROTOCOL_NAME=audio.jambonz.org"
Environment="MOD_AUDIO_FORK_SERVICE_THREADS=1"
Environment="MOD_AUDIO_FORK_BUFFER_SECS=3"
Environment="LD_LIBRARY_PATH=/usr/local/lib"
Environment="GOOGLE_APPLICATION_CREDENTIALS=/home/admin/credentials/gcp.json"
ExecStart=/usr/local/freeswitch/bin/freeswitch -nc -nonat
```

#### Install drachtio apps

Choose a user to install the drachtio applications under -- the instructions below assume the `admin` user; if you use a different user then edit the instructions accordingly (note: the user must have sudo privileges).

Execute the following commands from the home directory of the install user:

```bash
mkdir apps credentials
cd apps
git clone https://github.com/jambonz/jambonz-feature-server.git
git clone https://github.com/jambonz/fsw-clear-old-calls.git
cd jambonz-feature-server && sudo npm install --unsafe-perm
cd ../fsw-clear-old-calls && npm install && sudo npm install -g .
echo "0 * * * * root fsw-clear-old-calls --password JambonzR0ck$ >> /var/log/fsw-clear-old-calls.log 2>&1" | sudo tee -a /etc/crontab
sudo -u admin bash -c "pm2 install pm2-logrotate"
sudo -u admin bash -c "pm2 set pm2-logrotate:max_size 1G"
sudo -u admin bash -c "pm2 set pm2-logrotate:retain 5"
sudo -u admin bash -c "pm2 set pm2-logrotate:compress true"
sudo chown -R admin:admin /home/admin/apps
```
> Note: if you chose a different Freeswitch password, make sure to adjust the crontab entry above to use that password.

Next, copy your google service credentials json file into `/home/admin/credentials/gcp.json`. Note that this is referenced from the Environment variable that you set in the freeswitch systemd service file.
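
For example, copying the credentials from your laptop and restricting the file's permissions (a sketch; assumes ssh access as the admin user):

```bash
scp /path/to/your-gcp-service-account.json admin@<feature-server-ip>:/home/admin/credentials/gcp.json
ssh admin@<feature-server-ip> "chmod 600 /home/admin/credentials/gcp.json"
```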

Next, copy this file below into `~/apps/ecosystem.config.js`.

**Note:** Make sure to edit the file below to have the correct information for:

- your mysql and redis server hosts,
- your AWS access key, secret access key, and region
- your mysql and freeswitch passwords, if different than below
- the IP address of the SBC on the internal network,
- the network CIDR of the internal network, and
- if you have installed under a user other than 'admin' make sure to update the file paths accordingly (e.g. in the properties below such as 'cwd', 'out_file' etc).

```js
module.exports = {
  apps : [
    {
      name: 'jambonz-feature-server',
      cwd: '/home/admin/apps/jambonz-feature-server',
      script: 'app.js',
      instance_var: 'INSTANCE_ID',
      out_file: '/home/admin/.pm2/logs/jambonz-feature-server.log',
      err_file: '/home/admin/.pm2/logs/jambonz-feature-server.log',
      exec_mode: 'fork',
      instances: 1,
      autorestart: true,
      watch: false,
      max_memory_restart: '1G',
      env: {
        NODE_ENV: 'production',
        GOOGLE_APPLICATION_CREDENTIALS: '/home/admin/credentials/gcp.json',
        AWS_ACCESS_KEY_ID: '<your-aws-access-key-id>',
        AWS_SECRET_ACCESS_KEY: '<your-aws-secret-access-key>',
        AWS_REGION: 'us-west-1',
        JAMBONES_NETWORK_CIDR: '192.168.0.0/16',
        JAMBONES_MYSQL_HOST: '<your-mysql-host>',
        JAMBONES_MYSQL_USER: 'admin',
        JAMBONES_MYSQL_PASSWORD: 'JambonzR0ck$',
        JAMBONES_MYSQL_DATABASE: 'jambones',
        JAMBONES_MYSQL_CONNECTION_LIMIT: 10,
        JAMBONES_REDIS_HOST: '<your-redis-host>',
        JAMBONES_REDIS_PORT: 6379,
        JAMBONES_LOGLEVEL: 'info',
        HTTP_PORT: 3000,
        DRACHTIO_HOST: '127.0.0.1',
        DRACHTIO_PORT: 9022,
        DRACHTIO_SECRET: 'cymru',
        JAMBONES_SBCS: '192.168.3.11',
        JAMBONES_FEATURE_SERVERS: '127.0.0.1:9022:cymru',
        JAMBONES_FREESWITCH: '127.0.0.1:8021:JambonzR0ck$'
      }
    }]
};
```

Next, start the applications and configure them to restart on boot:

```bash
sudo -u admin bash -c "pm2 start /home/admin/apps/ecosystem.config.js"
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u admin --hp /home/admin
sudo -u admin bash -c "pm2 save"
sudo systemctl enable pm2-admin.service
```

Check to be sure they are running:

```bash
pm2 list
```

You should see output similar to this:
```bash
admin@ip-172-31-33-250:~$ pm2 list
┌─────┬───────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼───────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 1 │ jambonz-feature-server │ default │ 0.2.3 │ fork │ 22438 │ 47h │ 6 │ online │ 0.2% │ 85.4mb │ admin │ disabled │
└─────┴───────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 1015 │ online │ 0 │ 0.1% │ 66.4mb │ admin │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
```

Finally, restart the drachtio and freeswitch services:
```bash
sudo systemctl daemon-reload
sudo systemctl restart freeswitch
sudo systemctl restart drachtio
```
For good measure, restart the drachtio apps as well:
```bash
pm2 restart /home/admin/apps/ecosystem.config.js
```

Now you should have a running system. Verify that the drachtio server and freeswitch are running:
```bash
sudo systemctl status drachtio
sudo systemctl status freeswitch
```
Verify the apps are running and are not logging any errors:
```bash
pm2 list
pm2 log
```

Finally, tail the `/var/log/drachtio/drachtio.log` file and verify that sip OPTIONS requests are being sent to the SBC and are receiving a 200 OK response.
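
For example (a sketch; the exact log formatting may vary by drachtio version):

```bash
tail -f /var/log/drachtio/drachtio.log | grep -E "OPTIONS|200 OK"
```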
At this point, your system is ready for testing.