{"_id":"5703c7f2903e330e002d871e","parentDoc":null,"project":"56682f6c1fb5701900f893a0","version":{"_id":"5703c7f2903e330e002d8703","__v":4,"hasDoc":true,"hasReference":true,"project":"56682f6c1fb5701900f893a0","createdAt":"2016-04-05T14:13:06.422Z","releaseDate":"2016-04-05T14:13:06.422Z","categories":["5703c7f2903e330e002d8704","5703c7f2903e330e002d8705","5703c7f2903e330e002d8706","5703c7f2903e330e002d8707","5703c7f2903e330e002d8708","5703c7f2903e330e002d8709","5703c7f2903e330e002d870a","5703c7f2903e330e002d870b","5703c7f2903e330e002d870c","573d96148ca48f320093ed5b","573dd2e38cf1492400bba6e0","57a9cc1f5b1ace0e00de743e"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"2.0.0","version":"2"},"__v":0,"user":"56682f4a8639090d0075933b","category":{"_id":"5703c7f2903e330e002d8708","version":"5703c7f2903e330e002d8703","project":"56682f6c1fb5701900f893a0","__v":0,"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-12-11T17:42:23.480Z","from_sync":false,"order":5,"slug":"infrastructure","title":"Infrastructure"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-12-11T17:43:55.333Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"## Production Environment\n\nHere's an example of a minimal live/production environment:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/0kW5EqdYSD6oEbCRvOAY_Screen%20Shot%202015-12-11%20at%2017.49.25.png\",\n        \"Screen Shot 2015-12-11 at 17.49.25.png\",\n        \"644\",\n        \"343\",\n        \"#215468\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nCritically this has a web server and database cluster which provides resilience and stability to the service. A load balancer should distribute the traffic to the web servers intelligently - only sending traffic to live nodes.\n\n## Load Balancer\n\nThe load balancer distributes traffic between the web servers providing scalability and resilience. 
The choice of load balancer, configuration and maintenance is left entirely to the partner.\n\n## Network Connectivity\n\nWe expect the following connectivity to be in place over a gigabit ethernet switched network:\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Function\",\n    \"h-1\": \"Inbound\",\n    \"h-2\": \"Outbound\",\n    \"0-1\": \"TCP port 80 and 443 open to all.\\n\\nSSH access from our hub server (46.137.96.109).\",\n    \"0-2\": \"All high ports (to respond to http and https requests).\\n\\nSSH and port 80 access to our hub server (46.137.96.109).\\n\\nPorts 3310, 3311, 3312 and 3313 to the DB servers.\\n\\nPort 6379 to the Redis server.\\n\\nPort 80 access to all web servers.\\n\\nAccess to the SMTP service\\nPort 80 access to the load balancer.\",\n    \"0-0\": \"Web\",\n    \"1-1\": \"TCP port 6379 from the Web servers.\\n\\nSSH access from our hub server (46.137.96.109).\",\n    \"1-2\": \"All high ports to the web servers.\\n\\nSSH and port 80 access to our hub server (46.137.96.109).\\n\\nPort 80 access to all web servers.\\n\\nAccess to the SMTP service\\nPort 80 access to the load balancer.\\n\\nPorts 3310-3313 to both DBs.\",\n    \"1-0\": \"Redis\",\n    \"2-2\": \"All high ports to web servers and DB servers.\\n\\nPort 3310,3311,3312,3313 to all DB servers.\\n\\nSSH and port 80 access to our hub server (46.137.96.109).\",\n    \"2-1\": \"TCP port 3310,3311,3312,3313 from the web servers and all DB servers.\\n\\nSSH access from our hub server (46.137.96.109).\",\n    \"3-1\": \"SMTP connections from all web servers.\",\n    \"3-0\": \"SMTP\",\n    \"2-0\": \"Database\"\n  },\n  \"cols\": 3,\n  \"rows\": 4\n}\n[/block]\nDepending on how the environment evolves, we may well require access to be opened up to different servers and ports at different times.\n\n## Monitoring\n\nWe use Zabbix to monitor live environments. In order to monitor all servers, we will need outbound access on port 10051 to IP address 46.51.192.112.\n\nIf monitoring is not required, then these ports need not be open to us.\n\n## Staging Environment\n\nThe staging environment is to provide a safe environment in which we (BaseKit and the partner) can test upgrades to the code.\n\nIn an ideal world the staging environment would be a smaller replica of the live environment, however in practice we would suggest that having a single server capable of running varnish, apache, redis and mysql is fine. This is why the memory and storage are quite high in the minimum specs above.\n\n## SMTP Service\n\nThe BaseKit application needs to send emails through an SMTP service. These emails are things such as error emails and also mails generated by customers’ websites when using the form widget. For example ‘contact us’ emails, order emails etc.\n\nWe can set the application to hit any SMTP service and can authenticate to that service.\n\nThe SMTP service must be configured to allow mail from all web servers in the cluster and also to relay mail for a particular ‘from’ address which will need to be defined by the partner.\n\nWe leave the choice of SMTP server along with the configuration and maintenance entirely up to the partner.\n\n## Storage\n\nThere is a need for storage to be shared between all web servers and the Redis server. 
We leave the choice of this storage to the partner, but something like NFS has proven adequate in the past.\nThe shared storage should not reside on one of the web servers, but be mountable by us from all web servers and the redis server.\n\nThe databases also require storage attached to which has a very fast access rate.\n\nFor both web and DB storage we ask that whatever storage solution the partner chooses, it is expandable and has a minimum of 300 random write IOPS, preferably 500 IOPS.\n\nWe recommend using RAID 10 setup as LVM for the DB storage.\n\n## Partitioning\n\nWe ask for roughly the following partitioning layouts:\n\n### Databases\n\n50GB root partition (RAID 10)\n20GB Swap partition that is separate from the LVM created for the DB data.\n500GB RAID10 partition on a separate device\n\n### Webs\n\n50GB for the home partition (most of our footprint will be in /home/basekit/).\n\n### NTP Service\n\nPlease ensure that all servers have access to a time server and are properly synced.\n\n## DNS\n\nEach environment commissioned should have a domain. The live domain should point to the live load balancer. The staging domain should point to the staging server.\n\nThese domains need to be in place before we commence the hardware installation so that we can immediately use DNS for accurate testing and a proper user experience.\n\nWe would expect the partner to be able to register and configure their own domains.\nAll changes must be made on the DNS servers that the domain is pointing to.\n\nBelow are examples of how we recommend DNS should be set up.\n\n### Live\n\nLoad Balancer IP: 123.123.123.123\nProduction Domain: example.com\n\nRecords required:\nexample.com A => 123.123.123.123\n*.example.com A => 123.123.123.123\n\nCustomers would then point their domain to either the environment domain or to the load balancer IP:\n\ncustdomain.com A=> 123.123.123.123\n*.custdomain.com CNAME => example.com\n\n### Staging\n\nStaging Server IP: 234.234.234.234\nStaging Domain: staging-example.com\n\nRecords required:\nstaging-example.com A => 234.234.234.234\n*.staging-example.com A => 234.234.234.234\n \nAs this is a staging environment, there will be no customer domains on this environment although if any test domains are used, they will need to be set up as follows:\n\ntestdomain.com A => 234.234.234.234\n*.testdoimain.com CNAME => staging-example.com\n\n## Assistance\n\nFor any queries about infrastructure please feel free to get in touch with:\n\nAndy Waddams (Infrastructure Team Lead): [andy:::at:::basekit.com](mailto:andy@basekit.com)\nInfrastructure Team: [infrastructure@basekit.com](mailto:infrastructure@basekit.com)","excerpt":"How to deploy BaseKit on-premise","slug":"deployment-guide","type":"basic","title":"Deployment Guide"}
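Purely as an illustrative sketch of the behaviour described above, an HAProxy configuration that health-checks nodes and only routes to the ones that respond might look like the following (hostnames and IP addresses are placeholders, not values from any real environment):

```
# Illustrative only - any load balancer the partner prefers is fine.
frontend basekit_http
    mode http
    bind *:80
    default_backend basekit_web

backend basekit_web
    mode http
    balance roundrobin
    option httpchk GET /            # health check, so traffic only reaches live nodes
    server web1 10.0.0.11:80 check  # placeholder web server addresses
    server web2 10.0.0.12:80 check
```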

## Network Connectivity

We expect the following connectivity to be in place over a gigabit Ethernet switched network:

| Function | Inbound | Outbound |
| --- | --- | --- |
| Web | TCP ports 80 and 443 open to all.<br>SSH access from our hub server (46.137.96.109). | All high ports (to respond to HTTP and HTTPS requests).<br>SSH and port 80 access to our hub server (46.137.96.109).<br>Ports 3310-3313 to the DB servers.<br>Port 6379 to the Redis server.<br>Port 80 access to all web servers.<br>Access to the SMTP service.<br>Port 80 access to the load balancer. |
| Redis | TCP port 6379 from the web servers.<br>SSH access from our hub server (46.137.96.109). | All high ports to the web servers.<br>SSH and port 80 access to our hub server (46.137.96.109).<br>Port 80 access to all web servers.<br>Access to the SMTP service.<br>Port 80 access to the load balancer.<br>Ports 3310-3313 to both DB servers. |
| Database | TCP ports 3310-3313 from the web servers and all DB servers.<br>SSH access from our hub server (46.137.96.109). | All high ports to the web servers and DB servers.<br>Ports 3310-3313 to all DB servers.<br>SSH and port 80 access to our hub server (46.137.96.109). |
| SMTP | SMTP connections from all web servers. |  |

Depending on how the environment evolves, we may well require access to be opened up to different servers and ports at different times.
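How these rules are enforced is up to the partner; as a rough sketch only, the inbound side for a web server could be expressed with iptables along these lines (the hub server IP is taken from the table above, everything else is illustrative):

```
# Illustrative inbound rules for a web server - adapt to the partner's own firewall tooling.
iptables -A INPUT -i lo -j ACCEPT                                   # local traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # replies to outbound connections
iptables -A INPUT -p tcp --dport 80  -j ACCEPT                      # HTTP from anywhere
iptables -A INPUT -p tcp --dport 443 -j ACCEPT                      # HTTPS from anywhere
iptables -A INPUT -p tcp -s 46.137.96.109 --dport 22 -j ACCEPT      # SSH from the BaseKit hub server
iptables -A INPUT -j DROP                                           # drop everything else
```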

## Monitoring

We use Zabbix to monitor live environments. In order to monitor all servers, we will need outbound access on port 10051 to IP address 46.51.192.112.

If monitoring is not required, then this port need not be open to us.

## Staging Environment

The staging environment provides a safe environment in which we (BaseKit and the partner) can test upgrades to the code.

Ideally the staging environment would be a smaller replica of the live environment; in practice, a single server capable of running Varnish, Apache, Redis and MySQL is fine. This is why the memory and storage in the minimum specifications are quite high.

## SMTP Service

The BaseKit application needs to send emails through an SMTP service. These include error emails as well as mails generated by customers' websites when using the form widget, for example 'contact us' emails, order emails and so on.

We can point the application at any SMTP service and can authenticate to that service.

The SMTP service must be configured to allow mail from all web servers in the cluster and to relay mail for a particular 'from' address, which will need to be defined by the partner.

We leave the choice of SMTP server, along with its configuration and maintenance, entirely up to the partner.
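Purely to illustrate the relay requirement: if the partner happened to use Postfix, the relevant fragment of /etc/postfix/main.cf might look like this (the hostname and subnet are placeholders):

```
# /etc/postfix/main.cf - illustrative fragment only; any SMTP server the partner prefers is fine.
myhostname = mail.partner-example.com      # placeholder hostname
mynetworks = 127.0.0.0/8, 10.0.0.0/24      # allow relaying from the web servers' subnet
```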

## Storage

There is a need for storage to be shared between all web servers and the Redis server. We leave the choice of this storage to the partner, but something like NFS has proven adequate in the past. The shared storage should not reside on one of the web servers, but should be mountable by us from all web servers and the Redis server.

The database servers also require attached storage with a very fast access rate.

For both web and DB storage, we ask that whatever storage solution the partner chooses is expandable and provides a minimum of 300 random write IOPS, preferably 500 IOPS.

We recommend a RAID 10 array, set up as an LVM volume, for the DB storage.
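The shared storage technology is the partner's call; as one possible sketch using NFS (the export path, hostname and subnet are placeholders), the server-side export and a client mount might look like:

```
# On the storage host: /etc/exports (placeholder path and subnet)
/export/basekit-shared  10.0.0.0/24(rw,sync,no_subtree_check)

# On each web server and the Redis server: one-off mount for testing
mount -t nfs storage01:/export/basekit-shared /mnt/basekit-shared

# ...or persistently via /etc/fstab
storage01:/export/basekit-shared  /mnt/basekit-shared  nfs  defaults,_netdev  0  0
```
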
## Partitioning

We ask for roughly the following partitioning layouts:

### Databases

- 50GB root partition (RAID 10)
- 20GB swap partition, separate from the LVM volume created for the DB data
- 500GB RAID 10 partition on a separate device

### Web Servers

- 50GB for the home partition (most of our footprint will be in /home/basekit/)

## NTP Service

Please ensure that all servers have access to a time server and are properly synced.

## DNS

Each environment commissioned should have a domain. The live domain should point to the live load balancer. The staging domain should point to the staging server.

These domains need to be in place before we commence the hardware installation so that we can immediately use DNS for accurate testing and a proper user experience.

We would expect the partner to register and configure their own domains. All changes must be made on the DNS servers that the domain is pointing to.

Below are examples of how we recommend DNS should be set up.

### Live

- Load Balancer IP: 123.123.123.123
- Production Domain: example.com

Records required:

- `example.com A => 123.123.123.123`
- `*.example.com A => 123.123.123.123`

Customers would then point their domain to either the environment domain or the load balancer IP:

- `custdomain.com A => 123.123.123.123`
- `*.custdomain.com CNAME => example.com`

### Staging

- Staging Server IP: 234.234.234.234
- Staging Domain: staging-example.com

Records required:

- `staging-example.com A => 234.234.234.234`
- `*.staging-example.com A => 234.234.234.234`

As this is a staging environment there will be no customer domains, although any test domains used will need to be set up as follows:

- `testdomain.com A => 234.234.234.234`
- `*.testdomain.com CNAME => staging-example.com`

## Assistance

For any queries about infrastructure, please feel free to get in touch with:

- Andy Waddams (Infrastructure Team Lead): [andy@basekit.com](mailto:andy@basekit.com)
- Infrastructure Team: [infrastructure@basekit.com](mailto:infrastructure@basekit.com)