chmod 777 /bf_config
chmod 777 /bf_images
```
### Task 6 - Scripts and automation
When a server reboots, we have to make sure that the GlusterFS volumes are mounted correctly on it. We have made a script called boot.sh that takes care of the mounting on reboot. The script runs automatically on every reboot via crontab. Below is the crontab file we have on our three servers:
```
# Crontab file for worker nodes, is automaticly updated
* * * * * crontab < /home/ubuntu/dcsg2003/configuration/worker/cron
@reboot bash /home/ubuntu/dcsg2003/configuration/worker/boot.sh
*/10 * * * * root ntpdate -b ntp.justervesenet.no
* * * * * cp /home/ubuntu/dcsg2003/configuration/worker/haproxy.cfg /home/ubuntu/bookface/haproxy.cfg
* * * * * cp /home/ubuntu/dcsg2003/configuration/worker/docker-compose.yml /home/ubuntu/bookface/docker-compose.yml
```
We can see in the crontab above that the script boot.sh is run on every reboot of a server. The script boot.sh is shown below:
```
#!/bin/bash
source /home/ubuntu/dcsg2003/configuration/base.sh
ownIp=$(hostname -I | awk '{ print $1 }' | xargs)
# Mount volume if not already
mount -t glusterfs $ownIp:bf_config /bf_config
mount -t glusterfs $ownIp:bf_images /bf_images
chmod 777 /bf_config
chmod 777 /bf_images
sleep 5
# Check if mount is successful
diskImages="bf_images"
diskConfig="bf_config"
if [ $(df -h | grep "$diskImages" | wc -l) -ne 0 ]
then
echo "$diskImages is mounted"
else
echo "$diskImages is not mounted"
discord_log "Failed to mount disk."
exit
fi
if [ $(df -h | grep "$diskConfig" | wc -l) -ne 0 ]
then
echo "$diskConfig is mounted"
else
echo "$diskConfig is not mounted"
discord_log "Failed to mount disk."
exit
fi
# Start cockroach db
yes | bash /home/ubuntu/startdb.sh
# Start Docker
systemctl start docker
```
The script boot.sh will try to mount the volumes if they are not already mounted (they probably are not). Then the script sleeps for 5 seconds to give the mount commands time to finish. An if statement then checks whether each volume is mounted, and sends a message to our Bookface Discord webhook if not. After the disks are mounted (or not), CockroachDB is started. In the case of a failed mount, the script simply exits without trying again. This was done to avoid infinite looping, and if a mount has failed we suspect that manual intervention is necessary anyway. It is therefore easier to simply have the script ping our Discord in the case of an error.
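The function discord_log used above comes from base.sh and is not shown in this report. As a rough sketch (hypothetical; the real implementation and webhook URL differ), such a helper could look like this:
```
#!/bin/bash
# Hypothetical sketch of discord_log from base.sh (not the actual implementation).
# WEBHOOK_URL is a placeholder for our Bookface Discord webhook.
WEBHOOK_URL="https://discord.com/api/webhooks/<id>/<token>"

discord_log() {
    # Post the first argument as a message to the Discord webhook, prefixed with the hostname
    curl -s -H "Content-Type: application/json" \
         -X POST \
         -d "{\"content\": \"$(hostname): $1\"}" \
         "$WEBHOOK_URL" > /dev/null
}
```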
# Week 11 - Docker Swarm
## Task 1
Install Docker on all the servers with these commands:
```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common python3-openstackclient python3-octaviaclient docker-ce git
```
To make sure that we are able to download Docker images from the course's local (insecure) registry, we have to add the code below to the file ```/etc/docker/daemon.json``` on every server:
```
{
"insecure-registries":["192.168.128.23:5000"]
}
```
Then restart Docker: ```systemctl restart docker```
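To confirm that Docker picked up the setting, the registry should now be listed under "Insecure Registries" in the daemon info (a quick check, assuming grep is available):
```
docker info | grep -A 2 "Insecure Registries"
```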
## Task 2
Docker starts as a service automatically every time the server reboots. We don't want this, because we don't want a server to run any containers before Gluster is connected properly. We therefore have to turn off the automatic start of Docker on reboot. We do this by running these commands:
```
systemctl disable docker
systemctl disable docker.service
systemctl disable docker.socket
```
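A quick way to double-check that neither the service nor the socket will start on boot (output wording may vary slightly between Docker versions):
```
systemctl is-enabled docker.service docker.socket
# expected output: "disabled" for both units
```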
Instead, we add this line to the boot.sh script mentioned above:
```
systemctl start docker
```
This line starts Docker every time the script is run, but only after the volumes are mounted and everything is OK.
## Task 3 - Make the Docker swarm
Now we are ready to create the swarm. We run this command on server 1: ```docker swarm init```. Server 1 is now going to be the manager of the swarm, and server 2 and server 3 are going to be workers. When we run the docker swarm init command, we get the following output (on server 1):
```
docker swarm join --token SWMTKN-1-5msr1ct8s2de7vrwkj9c2zzcx50gvvg1e1ty8vvc10eiyarfic-6x253ue84qc8psdwqjnacuwll 192.168.134.127:2377
```
We run this command on server 2 and server 3. This makes the other servers join the swarm with the role of workers.
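Once both workers have joined, the swarm membership can be checked from the manager (server 1). All three nodes should show up, with server 1 marked as Leader:
```
docker node ls
# run on the manager (server 1); STATUS should be "Ready" for all three nodes
```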
## Task 4 - Download Bookface repo
Now we need to clone the Bookface git repo on server 1:
```
git clone https://github.com/hioa-cs/bookface.git
```
## Task 5 - Make a haproxy file for the database
Then we have to make a haproxy config file in the bookface folder on server 1:
```
cd bookface
nano haproxy.cfg
```
In the haproxy file we write this:
```
global
    log         127.0.0.1 local2
    pidfile     /tmp/haproxy.pid
    maxconn     4000

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
    bind *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth raphaesl:boing
    stats auth saraslu:bipbop

frontend db
    bind *:26257
    mode tcp
    default_backend databases

backend databases
    mode tcp
    balance roundrobin
    server server1 192.168.134.127:26257
    server server2 192.168.134.194:26257
    server server3 192.168.130.56:26257
```
## Task 6 - Start bookface
Bookface is now ready to be started on server 1:
```
docker stack deploy -c docker-compose.yml bf
```
It takes some time for the images to download; run this command to see the status:
```
docker service ls
```
When the output looks like the example below, the images have been downloaded correctly:
```
ID NAME MODE REPLICAS IMAGE PORTS
7oq1am7tbwqq bf_db_balance replicated 1/1 192.168.128.23:5000/haproxy:latest *:1936->1936/tcp
xv2xl7xg4iqr bf_memcache replicated 1/1 192.168.128.23:5000/memcached:latest
4vef6mk83t2r bf_web replicated 3/3 192.168.128.23:5000/bf:v17 *:80->80/tcp
```
Now Bookface should be operational. Check with the command below whether we get HTML output:
```
curl http://localhost
```
The output looks like this, and we can verify that Bookface is indeed up and running.
```
Output abbreviated...
</tr>
<tr>
<! userline: 855878343670792193>
<td class=row ><a href='/showuser.php?user=855878343670792193'><img src='/images/1veBXrMLr36Wme9FIPwubGgJcgXiTw.jpg'></a></td>
<td class=row ><a href='/showuser.php?user=855878343670792193'>Canaan Lamb</a></td>
<td class=row >0</td>
</tr>
<tr>
<! userline: 855877007372419073>
<td class=lightrow ><a href='/showuser.php?user=855877007372419073'><img src='/images/6ym3wVvuAKwdLeIuLbkqgm09T5tQyq.jpg'></a></td>
<td class=lightrow ><a href='/showuser.php?user=855877007372419073'>Brinley Barber</a></td>
<td class=lightrow >0</td>
</tr>
</table>
load time: 0s
</html>
```
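Instead of reading through the HTML, a quicker sanity check is to look only at the HTTP status code (a small example; a 200 response means the front page is being served):
```
curl -s -o /dev/null -w "%{http_code}\n" http://localhost
# expected output: 200
```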
## Task 7 - Database migration
Now we will import the data from our current Bookface setup into the new setup. First we have to fix a small issue from week 9: the table "posts" did not include a column for pictures. Open the database shell in the new cluster (on server 1):
```
cockroach sql --insecure --host=localhost:26257
```
We drop the old table called posts, and make a new table with an extra column for pictures:
```
use bf;
drop table posts;
CREATE table posts ( postID INT PRIMARY KEY DEFAULT unique_rowid(), userID INT, text STRING(300), name STRING(150), image STRING(32), postDate TIMESTAMP DEFAULT NOW());
GRANT SELECT,UPDATE,INSERT on TABLE bf.* to bfuser;
```
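To double-check that the new posts table actually got the image column, the schema can be inspected with a one-off query (a hedged example using the same connection parameters as above):
```
cockroach sql --insecure --host=localhost:26257 -e "SHOW COLUMNS FROM bf.posts;"
# the output should include a column named "image"
```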
Now we have to make sure that a Bookface instance cannot import data from another installation without a "password" (migration key). We write this in the cockroach shell:
```
insert into config ( key, value ) values ( 'migration_key', 'mjau' );
```
In the command above, "mjau" is our secret password. A bookface will not be able to import data from antother solution without this key.
Then log out from the cockroach shell.
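Similarly, we can verify that the migration key was stored (again a hedged example, run from outside the shell):
```
cockroach sql --insecure --host=localhost:26257 -e "SELECT * FROM bf.config WHERE key = 'migration_key';"
```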
### Database Migration
It is time to upgrade the current Bookface version to version 17. Run this command on server 1:
```
docker run -d --name=bf-import-helper -p 20080:80 -e BF_DB_HOST=192.168.130.246 -e BF_DB_PORT=26257 -e BF_DB_USER=AETAadmin -e BF_MEMCACHE_SERVER=192.168.130.6:11211 -e BF_DB_NAME=bf 192.168.128.23:5000/bf:v17
```
In the command above, BF_DB_HOST is the IP address of the old database, BF_DB_USER is the username on the old database, BF_MEMCACHE_SERVER is the IP address of the memcache server, and BF_DB_NAME is the old database name.
Now verify that the Docker container started with ```docker ps```, and run this command to verify that the old Bookface is still up:
```
curl localhost:20080
```
After this, we are ready to import the old Bookface database into the new cluster. Do this on server 1:
```
curl -s "http://localhost/import.php?entrypoint=192.168.134.127&key=mjau"
```
In the command above, entrypoint is the IP address where the helper container is running, in this case server 1. Key is the secret password we made earlier, mjau.
The Bookface data will then be replicated to all three servers thanks to GlusterFS. This command takes a while, probably half an hour or so.
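While the import runs, a rough way to follow the progress is to watch the image volume fill up on one of the servers, since the imported pictures land on the GlusterFS volume (the exact growth rate will vary):
```
watch -n 30 df -h /bf_images
```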
## Task 8 - Openstack balancer
Now we have to stop using the old balancer server and use the load balancer functionality in OpenStack instead. This is for efficiency and simplicity. If we keep using the old balancer server, Bookface will go down if that balancer goes down. We avoid this by using the OpenStack balancer.
First we copy the OpenStack "RC" file from the manager to server 1 with scp, then source it on server 1 and check that everything is in order. Run this on the manager:
```
scp DCSG2003_V23_group45-openrc.sh ubuntu@10.212.173.5:/ubuntu
```
Source the file with this command on server 1:
```
source DCSG2003_V23_group45-openrc.sh
```
Verify that everything is OK with ```openstack server list```. If the command returns the server list, the credentials work.
### Openstack balancer
Then we create a load balancer with OpenStack (on server 1):
```
openstack loadbalancer create --name bflb --vip-subnet-id c3ea9f88-8381-46b0-80e0-910c676a0fbd
openstack loadbalancer show bflb
# ... listener and pool "bfpool" are created here (commands not shown in this excerpt) ...
openstack loadbalancer member create --address 192.168.130.56 --protocol-port 80 bfpool
```
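Each of these OpenStack resources takes a little while to provision. A small, hedged helper loop that waits until the balancer reports ACTIVE before we continue could look like this:
```
# poll the provisioning status until the load balancer is ACTIVE
while [ "$(openstack loadbalancer show bflb -f value -c provisioning_status)" != "ACTIVE" ]
do
  echo "waiting for bflb to finish provisioning..."
  sleep 10
done
```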
We should create a new floating IP for the balancer, so we can go live with our new Bookface. We create the floating IP from the command line like this:
```
openstack floating ip create --description bf_swarm_ip 730cb16e-a460-4a87-8c73-50a2cb2293f9
```
The output we get is:
```
FLOATING_IP_ID: 5337587f-47b4-4d98-922e-981ed2231c28
```
Now we go back to the load balancer and find the ID listed under vip_port_id:
```
ubuntu@manager:~$ openstack loadbalancer show bflb
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | None |
| created_at | 2023-03-21T19:24:46 |
| description | |
| flavor_id | None |
| id | 1a971daf-0e20-457e-8a91-f8e7426da4ac |
| listeners | 82440adf-195c-4a65-96ac-fc20fa7d7b59 |
| name | bflb |
| operating_status | ONLINE |
| pools | 24792824-25e5-4fc7-acef-d750681b2df5 |
| project_id | 4661480d37154198b27092c5c15a765c |
| provider | amphora |
| provisioning_status | ACTIVE |
| updated_at | 2023-03-28T12:30:25 |
| vip_address | 192.168.132.39 |
| vip_network_id | 35fe8b76-cc31-4b48-b3f0-5c48eef8d289 |
| vip_port_id | 255d0c0d-5e31-4c6b-9e46-c806663fa748 |
| vip_qos_policy_id | None |
| vip_subnet_id | c3ea9f88-8381-46b0-80e0-910c676a0fbd |
| tags | |
+---------------------+--------------------------------------+
```
The vip_port_id is 255d0c0d-5e31-4c6b-9e46-c806663fa748.
We connect the floating IP from above to this vip_port_id:
```
openstack floating ip set --port 255d0c0d-5e31-4c6b-9e46-c806663fa748 5337587f-47b4-4d98-922e-981ed2231c28
```
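To confirm that the floating IP is now attached to the balancer's VIP port, we can show it and check the port_id and fixed_ip_address fields:
```
openstack floating ip show 5337587f-47b4-4d98-922e-981ed2231c28
```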
## Task 9 - Go live!
We change the floating IP address that the uptime challenge is registered with. The IP address below is the manager's:
```
uc set endpoint 10.212.173.5
uc status
uc traffic on
```
# Week 12 - Capacity calculation through statistical distributions
## Task 1