python manage.py runserver
celery -A eschernode worker -l warning -Q mnist_test
http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/
https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
gunicorn, pm2 and nginx in front of the Django server: http://technopy.com/deploying-a-flask-application-in-production-nginx-gunicorn-and-pm2-html-2/
Use supervisord for the celery worker and pm2 for the gunicorn server.
pm2 start server.sh
pm2 save
# to deploy new code, pull and restart the pm2 process
git pull
pm2 restart server
Stop the elasticsearch instance that was started at machine boot:
sudo systemctl stop elasticsearch
Edit /etc/elasticsearch/elasticsearch.yml to set cluster.name, node.name and network.host (near the end of the file).
Restart the elasticsearch instance with:
sudo systemctl start elasticsearch
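To verify that the new cluster and node names took effect after the restart, a minimal check against the HTTP API (a sketch using the Python requests library, assuming elasticsearch listens on localhost:9200):

import requests

# the root endpoint reports cluster_name, node name and version
info = requests.get("http://localhost:9200").json()
print(info["cluster_name"], info["name"], info["version"]["number"])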
Start cerebro using the conf file elasticsearch/cerebro.conf from this repository: either copy it to conf/application.conf inside the cerebro install, or point cerebro at an alternate config file when starting it:
bin/cerebro -Dconfig.file=/some/other/dir/alternate.conf
or start it with the shell script:
sh cerebro.sh
https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html#jvm-options
https://www.digitalocean.com/community/tutorials/how-to-install-java-with-apt-get-on-ubuntu-16-04
GPU instances: installing pytorch on GPU machines
https://github.com/kevinzakka/blog-code/blob/master/aws-pytorch/install.sh
# drivers
wget http://us.download.nvidia.com/tesla/375.51/nvidia-driver-local-repo-ubuntu1604_375.51-1_amd64.deb
sudo dpkg -i nvidia-driver-local-repo-ubuntu1604_375.51-1_amd64.deb
sudo apt-get update
sudo apt-get -y install cuda-drivers
sudo apt-get update && sudo apt-get -y upgrade
nvidia-smi # to check the status of the gpus
import torch
torch.randn(2, 3).cuda()    # allocate a small tensor on the GPU; errors out if CUDA is broken
torch.cuda.is_available()   # should return True
torch.cuda.device_count()   # number of visible GPUs
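A slightly fuller sanity check, a sketch that only touches the GPU when CUDA is actually usable (the tensor shapes here are arbitrary):

import torch

if torch.cuda.is_available():
    x = torch.randn(2, 3).cuda()
    y = torch.randn(3, 4).cuda()
    print(torch.mm(x, y).size())   # expect torch.Size([2, 4]), computed on the GPU
else:
    print("CUDA not available - recheck nvidia-smi and the driver install above")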
Useful links:
- https://discuss.pytorch.org/t/request-tutorial-for-deploying-on-cloud-based-virtual-machine/28/3
- https://github.com/kevinzakka/blog-code/blob/master/aws-pytorch/install.sh
- https://blog.waya.ai/quick-start-pyt-rch-on-an-aws-ec2-gpu-enabled-compute-instance-5eed12fbd168
- http://pytorch.org/
S3 bucket policy to allow all the mp4 files to be read from the dashboard URL:
{
"Version": "2012-10-17",
"Id": "Policy1514905660784",
"Statement": [
{
"Sid": "Stmt1514905654924",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::karaka_test/*.mp4"
}
]
}
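The same policy can also be attached from Python with boto3 (a sketch, assuming AWS credentials are configured and the karaka_test bucket from the policy above):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1514905654924",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::karaka_test/*.mp4",
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="karaka_test", Policy=json.dumps(policy))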
Kibana console queries against the filebeat indices:
GET /filebeat*/_search
{
"query": {
"term": {"json.event": "exp_timeline"}
}
}
GET /filebeat*/_search
{
"query": {
"bool": {
"filter": [
{
"term": {
"json.event": "exp_timeline"
}
},
{
"term": {
"json.exp": "5ab4c3f68d76b7696f2b8f11"
}
}
]
}
},
"size": 100
}
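The same searches can be run outside the Kibana console; a minimal sketch with the Python requests library, assuming elasticsearch on localhost:9200:

import requests

query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"json.event": "exp_timeline"}},
                {"term": {"json.exp": "5ab4c3f68d76b7696f2b8f11"}},
            ]
        }
    },
    "size": 100,
}

resp = requests.get("http://localhost:9200/filebeat*/_search", json=query).json()
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["json"].get("timeline"))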
# look up events by document id and patch their json.timeline.level
GET filebeat-2018.03.23/doc/AWJSGxy8iJHeu3XJ3o--
POST filebeat-2018.03.23/doc/AWJSCfIAiJHeu3XJ3o4s/_update?pretty
{
"doc": {
"json": {
"timeline": {
"level": "success"
}
}
}
}
#AWJSCfIAiJHeu3XJ3o4t
POST filebeat-2018.03.23/doc/AWJSCfIAiJHeu3XJ3o4t/_update?pretty
{
"doc": {
"json": {
"timeline": {
"level": "success"
}
}
}
}
POST filebeat-2018.03.23/doc/AWJSNVwMiJHeu3XJ3pQx/_update?pretty
{
"doc": {
"json": {
"timeline": {
"level": "danger"
}
}
}
}
POST filebeat-2018.03.23/doc/AWJSNRhUiJHeu3XJ3pQo/_update?pretty
{
"doc": {
"json": {
"timeline": {
"level": "danger"
}
}
}
}
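The four _update calls above all follow the same pattern (set json.timeline.level on a document by id), so they can be scripted; a sketch with the Python requests library, assuming elasticsearch on localhost:9200:

import requests

# document id -> timeline level to set, taken from the updates above
levels = {
    "AWJSCfIAiJHeu3XJ3o4s": "success",
    "AWJSCfIAiJHeu3XJ3o4t": "success",
    "AWJSNVwMiJHeu3XJ3pQx": "danger",
    "AWJSNRhUiJHeu3XJ3pQo": "danger",
}

for doc_id, level in levels.items():
    url = "http://localhost:9200/filebeat-2018.03.23/doc/%s/_update" % doc_id
    body = {"doc": {"json": {"timeline": {"level": level}}}}
    print(doc_id, requests.post(url, json=body).status_code)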