Pal's Web Log
Programming - Linux, Ubuntu, Python, Django, Postgres, HTML, Cloud, Data Science etc.
Thursday, July 25, 2019
Say you want to replace the keyword arguments of a function with a dict object; then do the following:
Sample_Function(key1="value1", key2="value2")
Sample_Dict = {"key1":"value1", "key2":"value2"}
Replace the kwargs with the dict:
Sample_Function(**Sample_Dict)
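Putting it together, a minimal runnable sketch (the function body here is only illustrative, since the post does not define one):
def Sample_Function(key1=None, key2=None):
    # The ** operator unpacks the dict into keyword arguments, so this call
    # is equivalent to Sample_Function(key1="value1", key2="value2").
    print(key1, key2)

Sample_Dict = {"key1": "value1", "key2": "value2"}
Sample_Function(**Sample_Dict)   # prints: value1 value2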
Wednesday, July 24, 2019
Git using a different SSH private key
Git, by default, uses the SSH private key at ~/.ssh/id_rsa.
But if you want to use another key at another location, then do the following:
export GIT_SSH_COMMAND="ssh -i ~/.ssh/KeyName"
You can add this in your .bashrc file as an alias:
alias pgit='export GIT_SSH_COMMAND="ssh -i ~/.ssh/KeyName"'
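If you are driving git from a Python script, the same environment variable can be set per call. A minimal sketch (the key path and the 'git pull' command are just examples):
import os
import subprocess

# Point git at the alternate private key for this invocation only.
env = dict(os.environ, GIT_SSH_COMMAND="ssh -i ~/.ssh/KeyName")
subprocess.run(["git", "pull"], env=env, check=True)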
SSH with identity file or Private Key
SSH by default uses the private key at ~/.ssh/id_rsa.
But if you want to use a different private key to access a remote location, you need to create one first using ssh-keygen, specifying the location and name (NAME).
To access using SSH, use the -i option as below:
ssh -i ~/.ssh/NAME user@remote.com
Sunday, April 21, 2019
Django Sqlite3 database with write access
To use the Sqlite3 database with write permissions, the following setup is needed. (SQLite creates journal files next to the database file, so the directory that contains it must itself be writable by the web server; hence the dedicated 'db' directory.)
#1. First create a directory, say 'db', under the project folder.
#2. Change settings.py to include the database section as follows:
# settings.py (needs "import os" at the top of the file)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db/db.sqlite3'),
    }
}
#3. Run python3 manage.py migrate to create the db.sqlite3 file under the db directory.
#4. Change the group ownership of the db folder:
sudo chgrp -R www-data db
#5. Change the access permissions of the db folder:
sudo chmod -R 774 db
#6. In the Apache site config, include the following to give access to the db directory:
<Directory /Path_To_Project/db>
Require all granted
</Directory>
Apache and Django running FireStore Database
Running Apache with Django requires the following setup to access the FireStore database.
The site config file needs the following directive:
WSGIApplicationGroup %{GLOBAL}
All WSGI applications in this group will execute within the context of the same Python sub-interpreter of the process handling the request. This is necessary to access the FireStore database with the same credentials.
Sunday, April 14, 2019
Cron Running in Terminal but not Executing
When you try a command, it may run successfully in the terminal. But when you add the same command to cron, it may not run.
The reason is that cron runs jobs with a minimal environment (it does not use your interactive 'bash' shell, and its PATH is very short), whereas the terminal uses 'bash' with the full PATH set up for all commands. So when you add a command to cron, give its full path, e.g. /usr/bin/COMMAND; you can find the full path with 'which COMMAND'.
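If you prefer to look the path up from Python (for example when generating crontab entries from a script), a minimal sketch:
import shutil

# shutil.which() searches the current PATH and returns the absolute path,
# e.g. '/usr/bin/python3', or None if the command is not found.
print(shutil.which("python3"))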
Saturday, April 13, 2019
Google Cloud FireStore Database - Backup, Restore & Import in Python
Google Cloud FireStore Database utilities for the following functions:
- To Backup Collections
- To Restore a Collection
- To Convert or Import a CSV File to a Collection
- To List All Collections
Installation:
- sudo pip3 install firedb
Usage Examples:
Initialize the FireStore Database
- import firedb
- db = firedb.db()
Backup:
- db.backup('collection_name')
- This will create a collection_name.json file as backup
- db.backup('col1', 'col2', 'col3')
- This will create multiple JSON files - col1.json, col2.json and col3.json - as backups
- db.backup(All=True)
- This will create JSON backup files for all collections in the database.
Restore:
- db.restore('collection_name.json')
- This will create a collection with name "collection_name"
Convert or Import from CSV:
- db.csv2collection(CSV_FileName)
- This will convert a CSV file to a collection.
- An optional keyword argument 'name' can be supplied to assign the document name.
To List all Collections:
- db.list()
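Putting the calls above together, a minimal end-to-end sketch (assuming the firedb package and the API exactly as listed above; 'users' is just an illustrative collection name):
import firedb

db = firedb.db()                  # initialize the FireStore database wrapper
db.backup('users')                # writes users.json as a backup of the 'users' collection
db.backup(All=True)               # writes one JSON backup file per collection
db.restore('users.json')          # recreates the 'users' collection from its backup
db.csv2collection('users.csv')    # imports a CSV file into a collection
print(db.list())                  # lists all collections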
Thursday, January 3, 2019
Python C Extension
Say we want to create a module named "pal" with a function called "f1" that calculates the sum of the cumulative sums 1 + (1+2) + ... + (1+2+...+n). For example, for n = 5 it calculates the following:
1+ (1+2) + (1+2+3) + (1+2+3+4) + (1+2+3+4+5) = 35
Step 1. Create a file called "palmodule.c" :
#include <Python.h>
static PyObject * pal_f1(PyObject *self, PyObject *args) {
    int x;
    /* Parse the single integer argument; return NULL on error so Python raises. */
    if (!PyArg_ParseTuple(args, "i", &x))
        return NULL;
    long sum = 0;
    /* Add up the cumulative sums 1 + (1+2) + ... + (1+2+...+x). */
    for (int C = 1; C <= x; ++C) {
        for (int c = 1; c < C + 1; ++c) {
            sum = sum + c;
        }
    }
    return Py_BuildValue("l", sum);
}

static PyMethodDef PalMethods[] = {
    {"f1", pal_f1, METH_VARARGS, "To calculate the sum of numbers"},
    {NULL, NULL}
};

static struct PyModuleDef palmodule = {
    PyModuleDef_HEAD_INIT,
    "pal",                      /* name of module */
    "Module with Function f1",  /* module documentation */
    -1,                         /* size of per-interpreter state, or -1 if the module keeps state in global variables */
    PalMethods
};

PyMODINIT_FUNC PyInit_pal(void) {
    return PyModule_Create(&palmodule);
}
Step 2. Create a file called "setup.py"
from distutils.core import setup, Extension
setup (name='pal', version='1.0', ext_modules=[Extension('pal', ['palmodule.c'])])
Step 3. Install the extension using the following command:
sudo python3 setup.py install
Step 4. Now open the python3 interpreter and do the following:
>>> from pal import f1
>>> f1(5)
35
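As a quick cross-check, the same cumulative-sum calculation in pure Python gives the identical result (this is only an illustrative reference, not part of the extension):
def f1_py(n):
    # 1 + (1+2) + (1+2+3) + ... + (1+2+...+n)
    return sum(sum(range(1, k + 1)) for k in range(1, n + 1))

print(f1_py(5))   # 35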
Tuesday, April 10, 2018
Apache2 configuration to run both Django and Flask on one server.
# Under /etc/apache2/sites-available/ add a file, say multi_sites.conf, with the following contents.
# The Django project name is DP and the Flask project name is FP:
<VirtualHost *>
Alias /static /../DP/static
<Directory /.../DP/static>
Require all granted
</Directory>
<Directory /../DP/DP>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess DP python-path=/../DP/
WSGIProcessGroup DP
WSGIScriptAlias / /../DP/DP/wsgi.py
</VirtualHost>
<VirtualHost *>
Alias /static /../FP/static
<Directory /../FP/static>
Require all granted
</Directory>
<Directory /../FP/>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess ocean python-path=/../FP/
WSGIProcessGroup ocean
WSGIScriptAlias / /../FP/wsgi.py
</VirtualHost>
# Enable the new site, disable the default site, and restart Apache:
sudo a2ensite multi_sites.conf
sudo a2dissite 000-default.conf
sudo apache2ctl restart
Saturday, March 10, 2018
Docker-compose
#Create a Docker Compose file, say NAME.yml, like below:
version: "3"
services:
web:
# replace NAME:run with your name and image details
image: NAME:run
deploy:
replicas: 1
resources:
limits:
cpus: "1"
memory: 1000M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
#Then run the following command
docker-compose -f NAME.yml up
version: "3"
services:
web:
# replace NAME:run with your name and image details
image: NAME:run
deploy:
replicas: 1
resources:
limits:
cpus: "1"
memory: 1000M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
#Then run the following command
docker-compose -f NAME.yml up
Friday, February 23, 2018
Tab completion in linux terminal
For tab completion, install two packages called bash-completion & bash-completion-extras
RedHat/CentOS: sudo yum install bash-completion bash-completion-extras
Ubuntu/Debian: sudo apt install bash-completion bash-completion-extras
Tuesday, February 6, 2018
To find the response time of a website using curl
To find a website's response time and other statistics, use the curl command in Linux with the following options:
curl -s -w '\n Statistics for :%{url_effective}\n Remote IP:\t\t%{remote_ip}\n Name Lookup Time:\t%{time_namelookup}\n TCP Connect Time:\t%{time_connect}\n SSL Connect Time:\t%{time_appconnect}\n Redirect Time:\t\t%{time_redirect}\n Pre-transfer Time:\t%{time_pretransfer}\n Start-transfer Time:\t%{time_starttransfer}\n Total Time:\t\t%{time_total}\n HTTP Code:\t\t%{http_code}\n' -o /dev/null https://www.amazon.com
Statistics for : https://www.amazon.com/
Remote IP: 54.192.139.249
Name Lookup Time: 0.253042
TCP Connect Time: 0.326341
SSL Connect Time: 0.498637
Redirect Time: 0.000000
Pre-transfer Time: 0.498744
Start-transfer Time: 0.718770
Total Time: 1.270710
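The same timings can also be collected from Python through libcurl's pycurl bindings (a minimal sketch, assuming the pycurl package is installed; the getinfo fields mirror curl's --write-out variables):
import pycurl
from io import BytesIO

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'https://www.amazon.com')
c.setopt(pycurl.FOLLOWLOCATION, True)
c.setopt(pycurl.WRITEFUNCTION, buffer.write)   # capture (and ignore) the response body
c.perform()

print('Name Lookup Time:', c.getinfo(pycurl.NAMELOOKUP_TIME))
print('TCP Connect Time:', c.getinfo(pycurl.CONNECT_TIME))
print('SSL Connect Time:', c.getinfo(pycurl.APPCONNECT_TIME))
print('Pre-transfer Time:', c.getinfo(pycurl.PRETRANSFER_TIME))
print('Start-transfer Time:', c.getinfo(pycurl.STARTTRANSFER_TIME))
print('Total Time:', c.getinfo(pycurl.TOTAL_TIME))
print('HTTP Code:', c.getinfo(pycurl.HTTP_CODE))
c.close()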
Sunday, February 4, 2018
Docker commands
#Build: To build, create a Dockerfile in a directory and, inside that directory, run the following command (replace NAME with your image name):
docker build -t NAME .
#List Docker Images:
docker images
#Remove Docker Image
docker image rm -f ID
#Run: To run the container image called NAME with the following options:
1. Map the host port 80 with container port 80
2. Limit memory to 1000MB
3. Limit CPU = 1
4. Assign name = NAME
use the following command:
docker run --rm -it -p 80:80 -m 1000m --cpus=1 --name=NAME NAME
#List Docker processes
docker ps
#See a live stream of container(s) resource usage statistics (like top)
docker container stats
#Display the running processes of a container (one shot)
docker container top NAME
#Check iptables for port mapping:
sudo iptables -t nat -L -n
#Commit Container or take a snapshot while running:
docker commit NAME NAME:run
Dockerfile
#Create a Dockerfile in the project directory. The following is a sample Dockerfile that installs Ubuntu with Postgres and Apache:
FROM ubuntu:rolling
RUN apt update
RUN apt install -y vim python3-pip apache2 libapache2-mod-wsgi-py3 postgresql postgresql-contrib postgresql-server-dev-9.6 openssh-server sudo curl iproute
RUN pip3 install --upgrade pip
RUN pip3 install Django django-mathfilters lxml psycopg2-binary requests
ADD install.sh /
ENTRYPOINT ["/bin/bash", "-x", "install.sh", "arg1"]