So it's Saturday and I'm feeling like writing a tutorial!


I’m still working on nuking my shell scripts, but those are configured to take delta dumps throughout the day!

This makes it easy to compare everything against the previous delta, so we can check at which time of day a system was poisoned. At the end of the day, if everything is fine, it takes a full-day delta dump and sends that to our offsite server, where it lives for around a month for record keeping.


So I should write this then?


yup you can :slight_smile:

but I can’t cross-question much on that… because… I have some things to complete by 27 June… so less time for experimentation


You can go through it after the 27th; we’ll be greeted with a weekend again, and I’ll answer your questions about it at that time.


Can you brief that tutorial right now… like requirements… what’s the minimum number of server instances needed… how can I locate servers in different geo locations… is it recommended…


At least 2 servers to balance load between, plus 1 (relatively small) server to accept requests and handle SSL termination.

You can locate servers anywhere, but this won’t be a geographically diverse tutorial. It works with one node (or two, in case you want the alternative approach) handling most of the traffic; if one goes down, its traffic is redirected to the backup application. All of them share the same database appliance. I’d suggest a separate load balancer in front of the database in case you want really strong redundancy, but it is not required in all cases; one DB with a local read replica can work just fine.

If your application is mission-critical and has seen some downtime scares recently, then having redundancy in place is both recommended and necessary.
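A minimal sketch of the setup described above, assuming nginx on the small front server (the hostnames, ports, and file paths are placeholders, not part of the original discussion):

```nginx
# Hypothetical /etc/nginx/conf.d/app.conf on the front server.
# Two app nodes share the load; the "backup" server only receives
# traffic when the primary servers are marked down.
upstream app_backend {
    server 10.0.0.11:8080;          # app node 1
    server 10.0.0.12:8080;          # app node 2
    server 10.0.0.13:8080 backup;   # standby, used only on failure
}

server {
    listen 443 ssl;
    server_name example.com;

    # SSL terminates here; traffic to the upstreams is plain HTTP.
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this shape, losing one app node just shifts its requests to the other; the backup node stays idle until both primaries fail.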


The answer is no… for now…

But I really like to be mindful before creating any project… We once had a talk on GHK about the classified site… I’ve currently written 6k lines of code and 36+ scripts for another project with the same technologies I will be using on that classified site…


Exactly, I’ve seen sites with 100k visitors doing just fine on a single bulky server with auto-reload configured for services that may crash.


which services ? nginx and php-fpm ?

Auto-reload, or restart on fail?


Not necessarily. You may have learned by now that it’s very difficult to scale PHP beyond a few thousand simultaneous access sessions.

However, when web servers like nginx, ELB, or Traefik are paired with application backends (such as nginx or Docker) interfacing with programs written in JS, Ruby/Rails, Go, or Python, you can achieve programmatic scaling, going even as far as programmatically spinning up a new instance and pushing in a replication script, and within minutes you get a fresh new application backend to help offload some congestion.


Restart on crash, basically.
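For "restart on crash", one common approach (assuming systemd, which most modern distros use) is a drop-in override for the service unit; the unit name below is just an example:

```ini
; Hypothetical drop-in: /etc/systemd/system/php-fpm.service.d/restart.conf
; Apply with: systemctl daemon-reload && systemctl restart php-fpm
[Service]
Restart=on-failure   ; restart only when the process exits abnormally
RestartSec=2         ; wait 2 seconds before restarting
```

`Restart=always` would also restart the service after clean stops, which is usually not what you want for manual maintenance.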


However, interestingly, I once did a proof of concept following some internet guides where we used Redis persistence to balance sessions across PHP as well. So I guess if that can be made bulletproof, and you could somehow prevent cache poisoning, then PHP app backends could also be replicated. You may, however, have to set it up so that all the assets are served from outside your infrastructure, because those can potentially create conflicts.
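The Redis-backed session idea above boils down to a couple of PHP settings, a sketch assuming the phpredis extension is installed (the Redis host address is a placeholder):

```ini
; Hypothetical php.ini / FPM pool config: store sessions in Redis
; so any replicated PHP backend can pick up a session that was
; started on another node.
session.save_handler = redis
session.save_path    = "tcp://10.0.0.20:6379"
```

Since all nodes read and write sessions from the same Redis instance, the load balancer no longer needs sticky sessions for PHP.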


Yes, Google can crawl it. There is some Ajax link or markup that you have to provide. I came to know about this yesterday when I checked Google Webmaster Tools and it showed a few duplicate URLs; one was for the Ajax crawl. Google it, though I’m not an expert on Angular.


Write a tutorial on Postgres. For the past 4 hours my CPU has been running at 80%, and it’s because of some Postgres tasks going on.
How do I check which queries are eating so much CPU? Even though it’s the weekend, I have only around 30-40 live visitors.

I restarted PostgreSQL and it’s still eating too much CPU.

I am worried because during weekdays I have like 100-200 live visitors from 10 to 6.
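Until the full tutorial lands, a quick sketch of how to see what Postgres is busy with: `pg_stat_activity` shows what is running right now, and the `pg_stat_statements` extension (if enabled) shows cumulative cost per query. Table and limit values here are just illustrative:

```sql
-- What is running right now, longest-running first:
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

-- Cumulative time per statement. Requires pg_stat_statements to be
-- added to shared_preload_libraries in postgresql.conf and then
-- CREATE EXTENSION pg_stat_statements; in the database.
-- (On newer Postgres versions the column is total_exec_time.)
SELECT calls, total_time, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

The first query is enough to catch a single runaway statement; the second is better for spotting cheap queries that run thousands of times.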


Added to my list; expect an overview of Postgres tomorrow (it’s night now, time to sleep because I’m an acchha bachha, a good kid :stuck_out_tongue: )


Will wait for it.
Please write it in simple language and treat me as a complete beginner.
Steps to debug and check. :slight_smile:


Reply with one:

- I’ve only copy-pasted install instructions for Django
- I do some occasional queries on Postgres, but those are copy-paste
- I can configure Postgres to run with Django on my own, but nothing more than that
- I run Postgres on my custom application and I’m about to take a deep dive


I can configure Postgres to run with Django on my own, but nothing more than that.

Actually, I use DigitalOcean’s one-click install app and it is automatically linked. I know nothing more than that.


Finally found the reason. Yesterday I made some changes to my pages, and one particular query was eating CPU :stuck_out_tongue:

Now I’ve indexed the column in the table and CPU is back below 5%.

I think your tutorial should cover indexing, checking which particular queries are eating resources, etc.

Lot of time wasted in Googling :slight_smile:
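For anyone following along, the fix described above comes down to something like this (the table and column names are made up for illustration):

```sql
-- EXPLAIN ANALYZE runs the query and shows where time is spent;
-- a "Seq Scan" over a large table is the usual CPU-eater.
EXPLAIN ANALYZE SELECT * FROM ads WHERE category_id = 42;

-- Adding an index on the filtered column lets the planner use an
-- index scan instead (hypothetical table/column names):
CREATE INDEX idx_ads_category_id ON ads (category_id);
```

Re-running the `EXPLAIN ANALYZE` afterwards should show the sequential scan replaced by an index scan and a much lower execution time.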


Now that we’re about to discuss more than just MySQL, I think we need a dedicated Database Discussion category where anyone can post their queries and we can find a solution together :wink: