diff --git a/oblig3/oblig3.md b/oblig3/oblig3.md
index ea02af6cdc4f69ce3ec5afe76e68196004f75099..8c60cc1d85a534762871b51a2256c81bda4ef847 100644
--- a/oblig3/oblig3.md
+++ b/oblig3/oblig3.md
@@ -1,6 +1,9 @@
-# Week 10 - Gluster File Storage
-## Gluster installation
-### Task 1
+# Group 45 DCSG2003 Oblig 3
+*Raphael Storm Larsen, Sara Stentvedt Luggenes*
+
+## Week 10 - Gluster File Storage
+### Gluster installation
+#### Task 1
We start by installing Gluster on all three servers:
```
apt-get -y install glusterfs-server glusterfs-client
@@ -8,7 +11,7 @@ systemctl enable glusterd
systemctl start glusterd
```

-### Task 2
+#### Task 2
Then we make the bricks (building blocks). These bricks are just folders for the distributed storage; in the real world they should be real hard drives. We use ordinary folders since we run on VMs and don't want to spend a lot of resources on volumes. We make one folder where the bookface images are stored, and another folder for configuration (we do this on all three servers):
```
mkdir /bf_brick
@@ -20,7 +23,7 @@ mkdir /bf_images
mkdir /bf_config
```

-### Task 3
+#### Task 3
Then we have to add the two other servers to the cluster. We run these commands on server 1:
```
gluster peer probe 192.168.134.194
@@ -42,7 +45,7 @@ Uuid: 4fef7622-32ad-4ea1-bde3-2bab6e71ac6f
State: Peer in Cluster (Connected)
```

-### Task 4
+#### Task 4
Now we create the two volumes to be distributed. We call one volume "bf_images" and the other one "bf_config". The following commands only have to be run on one server; we do it on server 1:
```
gluster volume create bf_images replica 3 192.168.134.127:/bf_brick 192.168.134.194:/bf_brick 192.168.130.56:/bf_brick force
@@ -52,7 +55,7 @@ gluster volume start bf_config
```
The IP addresses above belong to server 1, server 2 and server 3. The data in bf_images and bf_config will be replicated 3 times, once on each server. This is not space efficient, but in environments where high availability and high reliability are critical, it is the best solution.

-### Task 5
+#### Task 5
To check whether the volumes are now part of the file system, we run the command ```df -h``` on server 1:
```
Filesystem Size Used Avail Use% Mounted on
@@ -93,7 +96,7 @@ chmod 777 /bf_config
chmod 777 /bf_images
```

-### Task 6
+#### Task 6
When a server reboots, we have to make sure that the disks are mounted correctly again. We have made a script called boot.sh that takes care of the mounting on reboot. The script is run automatically on every reboot via crontab. Below is the crontab file we have on our three servers:
```
@@ -155,9 +158,9 @@ systemctl start docker
The boot.sh script tries to mount the disks if they are not already mounted (which they probably are not). The script then sleeps for 5 seconds to give the commands time to finish. An if-statement then checks whether the volumes are mounted or not, and sends a message to our Bookface Discord webhook. After the disks are mounted (or not), cockroach is started. In the case of a failed mount, the script simply exits without retrying. This was done to avoid infinite looping, and if the mount has failed we suspect that manual intervention is necessary anyway. As such, it is easier to simply have the script ping our Discord in the case of an error.
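To make the description above concrete, here is a minimal sketch of the mount-and-notify part of a boot.sh like the one described. The webhook URL is a placeholder, the mounts use server 1's address purely for illustration, and the real script goes on to start CockroachDB and Docker afterwards:
```
#!/bin/bash
# Sketch of boot.sh: mount the Gluster volumes, verify them, and report to Discord.
WEBHOOK="https://discord.com/api/webhooks/CHANGEME"   # placeholder webhook URL

# Mount the two Gluster volumes (the source address shown is server 1, for illustration).
mount -t glusterfs 192.168.134.127:/bf_images /bf_images
mount -t glusterfs 192.168.134.127:/bf_config /bf_config
sleep 5   # give the mount commands time to finish

if mountpoint -q /bf_images && mountpoint -q /bf_config; then
    curl -s -H "Content-Type: application/json" \
         -d '{"content": "boot.sh: Gluster volumes mounted OK"}' "$WEBHOOK"
else
    curl -s -H "Content-Type: application/json" \
         -d '{"content": "boot.sh: Gluster mount FAILED, manual intervention needed"}' "$WEBHOOK"
    exit 1   # no retry; a failed mount needs a human anyway
fi

# CockroachDB and Docker are started after this point in the real script.
```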
-# Week 11 - Docker Swarm
+## Week 11 - Docker Swarm

-## Task 1
+### Task 1

Install Docker on all the servers with these commands:

```
@@ -175,7 +178,7 @@ To make sure that we are able to download Docker images from other places, we ha

Then restart Docker:
```systemctl restart docker```

-## Task 2
+### Task 2

Docker starts as a service automatically every time a server reboots. We don't want this, because we don't want a server to run any containers before Gluster is connected properly. We therefore have to turn off the automatic start of Docker on reboot. We do this by running these commands:

```
@@ -189,7 +192,7 @@ systemctl start docker
```
This line starts Docker every time the script is run, but only after the volumes are mounted and everything is OK.

-## Task 3 - Make the Docker swarm
+### Task 3 - Make the Docker swarm
Now we are ready to create the swarm. We run this command on server 1: ```docker swarm init```. Server 1 is now going to be the manager in the swarm, while server 2 and server 3 are going to be workers. When we run the docker swarm init command, we get the following output (on server 1):
```
@@ -197,14 +200,14 @@ docker swarm join --token SWMTKN-1-5msr1ct8s2de7vrwkj9c2zzcx50gvvg1e1ty8vvc10eiy
```
We run this command on server 2 and server 3. This makes the other servers join the swarm with the role of workers.

-## Task 4 - Download Bookface repo
+### Task 4 - Download Bookface repo

Now we need to download the Bookface git repo on server 1:

```
git clone https://github.com/hioa-cs/bookface.git
```

-## Task 5 - Make a haproxy file to the database
+### Task 5 - Make a haproxy file for the database

Then we have to make a haproxy file in the bookface folder on server 1:

```
@@ -259,7 +262,7 @@ backend databases
server server3 192.168.130.56:26257
```

-## Task 6 - Start bookface
+### Task 6 - Start bookface

Bookface is now ready to be started on server 1:

```
@@ -306,7 +309,7 @@ load time: 0s
</html>
```

-## Task 7 - Database migration
+### Task 7 - Database migration

Now we will import the data from our current bookface setup into the new setup. First we have to fix a small issue from week 9: the table "posts" did not include a column for pictures. Go to the database in the new cluster (on server 1):

```
@@ -350,7 +353,7 @@ In the command above, entrypoint means the ip address to where the help-containe

Now the bookface data will be replicated to all three servers thanks to GlusterFS. This command is going to take a while, probably half an hour or so.

-## Task 8 - Openstack balancer
+### Task 8 - Openstack balancer

Now we have to stop using the old balancer server and use the load balancer functionality in OpenStack instead. This is for efficiency and simplicity. If we keep using the old balancer server, bookface will go down if that single balancer goes down. We avoid this by using the OpenStack balancer.

@@ -425,7 +428,7 @@ We connect the floating_ip_id from above and the vip_port_id together:
openstack floating ip set --port 255d0c0d-5e31-4c6b-9e46-c806663fa748 5337587f-47b4-4d98-922e-981ed2231c28
```

-## Task 9 - Go live!
+### Task 9 - Go live!
We change the floating IP address that the uptime challenge is registered with.
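Before doing so, a quick sanity check on server 1 can confirm that the swarm, the Gluster volumes and the new load balancer all look healthy. A minimal sketch; the exact service and balancer names depend on the bookface stack:
```
docker node ls                # all three nodes should report STATUS Ready
docker service ls             # the bookface services should be at full replica count
gluster volume status         # bricks should be online on all three servers
openstack loadbalancer list   # the new balancer should be ACTIVE and ONLINE
```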
The IP address below is the floating IP of the OpenStack load balancer:
```
@@ -434,20 +437,20 @@ uc status
uc traffic on
```

-# Week 12 - Capacity calculation through statistical distributions
-## Task 1
-### 50k.dat
+## Week 12 - Capacity calculation through statistical distributions
+### Task 1
+#### 50k.dat

<img src=./bilder/week12_task1.1.PNG>
<img src=./bilder/week12_task1.2.PNG>

1) The images above show a large range in how many users the game hosts throughout the given time period. The range is 44425 users from the least to the most users at any given time, with a maximum value of 44876 and a minimum value of 451. The value 451 probably indicates some sort of downtime for the game, while the value 44876 tells us how many users the servers should be able to host at any given time, since this is the extreme end of the scale. By looking at the 5% table in the middle of the document, we can see that 95% of the time there are 29 000 users or fewer in the game. 5% of the time there are approximately 30 000 or more users, which makes such a large number of users pretty rare.

-### Begin.dat
+#### Begin.dat

<img src=./bilder/week12_task1.3.PNG>
<img src=./bilder/week12_task1.4.PNG>

-### End.dat
+#### End.dat

<img src=./bilder/week12_task1.5.PNG>
<img src=./bilder/week12_task1.6.PNG>

@@ -455,7 +458,7 @@ uc traffic on

3) Based on the data in begin.dat and end.dat, the popularity of the game has increased by approximately 18822 - 13139 = 5683 users over the time period.

-## Task 2
+### Task 2

We run the following command and get the following output in JSON format:

@@ -624,7 +627,7 @@ fi

This script scales the number of frontpage users up and down, depending on factors such as the download time, the current number of frontpage users, and our own preferences. We add the script to crontab, and it is run every 15 minutes. With the configuration above, bookface will automatically add 25 frontpage users if the download time is 1.9 seconds or less. If the download time is more than 5.0 seconds, bookface will automatically remove 50 frontpage users, thereby reducing the load.

-## Task 3
+### Task 3
We need to get the download times for bookface for the last 24 hours.
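Once the raw download times are collected, the mean and 95th percentile can also be checked directly on the command line as a sanity check on the graphs. A rough sketch, assuming a hypothetical file times.txt with one download time (in seconds) per line:
```
# Mean and 95th percentile of the values in times.txt (hypothetical file name).
sort -n times.txt | awk '{ a[NR] = $1; sum += $1 }
    END { printf "mean: %.2fs  95th percentile: %.2fs\n", sum / NR, a[int(NR * 0.95)] }'
```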
First, we run this command and get the data in this format (the command and output below are for the last hour only):
```
@@ -716,10 +719,10 @@ By looking at the images above, we can see that mean download time for our bookf

-### Consul Setup
+## Week 15 - Consul Setup

-IP: 192.168.134.127
-DATACENTRE: AETA
+- ***IP:*** 192.168.134.127
+- ***DATACENTRE:*** AETA

```
docker run -d -v /opt/consul:/consul/data --name=consul --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' -e 'CONSUL_BIND_INTERFACE=ens3' 192.168.128.23:5000/consul:latest agent -server -bind=192.168.134.127 --datacenter=AETA -client=0.0.0.0 -ui -bootstrap
@@ -732,23 +735,17 @@ curl -X PUT -d '{"ID": "SERVER.AETA", "Name": "bookface", "Address": "192.168.13
docker run -d -v /opt/consul:/consul/data --name=consul --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' 192.168.128.23:5000/consul:latest agent -server -join=192.168.134.127 --datacenter=AETA -client=0.0.0.0 -bind=192.168.134.194
curl -X PUT -d '{"ID": "SERVER2.AETA", "Name": "bookface", "Address": "192.168.132.39", "Port": 80 }' http://localhost:8500/v1/agent/service/register
-
-
# Server 3
docker run -d -v /opt/consul:/consul/data --name=consul --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' 192.168.128.23:5000/consul:latest agent -server -join=192.168.134.127 --datacenter=AETA -client=0.0.0.0 -bind=192.168.130.56
curl -X PUT -d '{"ID": "SERVER3.AETA", "Name": "bookface", "Address": "192.168.132.39", "Port": 80 }' http://localhost:8500/v1/agent/service/register
-
-```
-
-# på en hvilkårlig server
+# On any server
dig @192.168.128.23 bookface.service.aeta.dcsg2003. ANY

-# slår opp på egen infrastruktur, hvilkårlig server:
+# On any server, looks up our own infrastructure
dig @192.168.128.23 server1.node.aeta.dcsg2003. ANY

-# task 6
-```
+# Task 6
curl -X PUT -d '{"ID": "SERVER.AETAlocal", "Name": "bookfacelocal", "Port": 80 }' http://127.0.0.1:8500/v1/agent/service/register
curl -X PUT -d '{"ID": "SERVER2.AETAlocal", "Name": "bookfacelocal", "Port": 80 }' http://127.0.0.1:8500/v1/agent/service/register
curl -X PUT -d '{"ID": "SERVER3.AETAlocal", "Name": "bookfacelocal", "Port": 80 }' http://127.0.0.1:8500/v1/agent/service/register
diff --git a/oblig3/oblig3.pdf b/oblig3/oblig3.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f7469d45285221a7edb1598d4e10c0bef7ccec4b
Binary files /dev/null and b/oblig3/oblig3.pdf differ
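As a final check on the Consul setup above, the registrations can be verified from any of the servers by querying the local agent's HTTP API and Consul's DNS interface. A minimal sketch; the service name, domain and Consul server IP are the ones used in the commands above:
```
# List the registered bookface instances via the local Consul agent's HTTP API.
curl -s http://localhost:8500/v1/catalog/service/bookface

# Resolve the same service through Consul's DNS interface, as in the dig commands above.
dig @192.168.128.23 bookface.service.aeta.dcsg2003. ANY
```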