WARNING: Erlang nodes either trust each other completely or not at all. When you give a node access, you give it FULL access: the other node can execute any command, and can even shut your node down completely. If you connect to a production node, do not run any command unless you are completely sure about its effects.
-
Cookie
-
Set the same cookie on all the nodes. There are different ways to set the cookie in a node:
-
When starting Erlang, use this argument in the command line:
erl -setcookie SFEWRG34AFDSGAFG35235 -name nodex
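Note that a cookie given on the command line is visible to other users of the machine (for example in the output of ps), so on shared systems the cookie file described below may be preferable.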
-
Store the cookie in the file $HOME/.erlang.cookie:
SFEWRG34AFDSGAFG35235
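For example, a minimal way to create this file from a Unix shell (Erlang requires the cookie file to be readable only by its owner, otherwise distribution will not start):
echo "SFEWRG34AFDSGAFG35235" > $HOME/.erlang.cookie
chmod 400 $HOME/.erlang.cookie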
-
Or, if you started Erlang with the -sname or -name option, you can change the cookie from the Erlang console:
erlang:set_cookie(node(), 'SFEWRG34AFDSGAFG35235').
-
-
Check that all the Erlang nodes use the same cookie. In the Erlang console:
erlang:get_cookie().
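If everything is correct, all the nodes return the same atom; in this example:
'SFEWRG34AFDSGAFG35235'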
-
-
Node names
-
Give every node a different name when you start it, using the -name command line argument:
erl -name node1
-
Once a node is started, you can check its name in the Erlang console:
node().
'node1@machine1.example.org'
-
-
Hosts file
-
Create a file named $HOME/.hosts.erlang (alternatively, it can be placed in the directory returned by code:root_dir()). On machine1 it contains:
'machine2.example.com'.
-
Check if it is correctly loaded:
net_adm:host_file().
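It should return the list of hosts read from the file; in this example something like:
['machine2.example.com']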
-
-
Interconnect the Erlang nodes
-
Start all the nodes and run this in the first one:
net_adm:world().
['node2@machine2.example.com']
-
Check which nodes this node is connected to:
nodes().
['node2@machine2.example.com']
-
You can also try to connect to other nodes manually. If the connection succeeds, the answer is pong; if it fails, pang:
net_adm:ping('node2@machine2.example.com').
pong
net_adm:ping('node123124124124@machine2.example.com').
pang
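If you prefer, a connection can also be established explicitly with net_kernel:connect_node/1 (a sketch of the same idea; it returns true when the connection succeeds):
net_kernel:connect_node('node2@machine2.example.com').
true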
-
Make a call to all the nodes (including the local one) to retrieve their local time:
rpc:multicall([node() | nodes()], erlang, localtime, []).
{[{{2004,10,24},{13,5,20}},{{2004,10,24},{13,2,54}}],[]}
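The result is a tuple with the list of replies and the list of nodes that did not answer. To call a function on a single node you can use rpc:call/4 instead, for example:
rpc:call('node2@machine2.example.com', erlang, localtime, []).
{{2004,10,24},{13,2,54}}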
-
-
Start an Erlang shell on a remote node
-
In the Erlang shell on node1, press Control+G to enter the job control mode.
User switch command
 -->
-
You can ask for help:
 --> h
  c [nn]   - connect to job
  i [nn]   - interrupt job
  k [nn]   - kill job
  j        - list all jobs
  s        - start local shell
  r [node] - start remote shell
  q        - quit erlang
  ? | h    - this message
 -->
-
Show the current jobs:
 --> j
   1* {shell,start,[]}
 -->
-
Start a new shell on node2 and see how the new job is added:
 --> r 'node2@machine2.example.com'
 --> j
   1  {shell,start,[]}
   2* {node2@machine2.example.com,shell,start,[]}
 -->
-
Connect to the new job:
 --> c 2
Eshell V5.4  (abort with ^G)
(node2@machine2.example.com)1>
-
-
Play with your remote Erlang shell
-
Remark: in the remote Erlang shell you can do anything you could do in a local Erlang shell, so take care with what you do!
-
Get memory information:
memory().
[{total,2783288},
 {processes,380116},
 {processes_used,376564},
 {system,2403172},
 {atom,215197},
 {atom_used,197576},
 {binary,65020},
 {code,1615988},
 {ets,110520}]
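If you only need one of those values, erlang:memory/1 accepts the key directly, for example:
erlang:memory(total).
2783288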
-
Get the number of Erlang processes running:
erlang:system_info(process_count).
-
Get the list of processes running on the Erlang node:
i().
Pid                   Initial Call                          Heap     Reds Msgs
Registered            Current Function                     Stack
<0.0.0>               otp_ring0:start/2                      377     3633    0
init                  init:loop/1                               2
<0.2.0>               erlang:apply/2                         2584    74621    0
erl_prim_loader       erl_prim_loader:loop/3                    5
<0.4.0>               gen_event:init_it/6                     377      277    0
...
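To inspect a single process you can use erlang:process_info/1, which returns a list of properties; for example, for the registered init process:
erlang:process_info(whereis(init)).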
-
Number of vCards published:
mnesia:table_info(vcard, size).
-
Number of roster items in database:
mnesia:table_info(roster, size).
-
Total MUC rooms:
ets:info(muc_online_room, size).
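If you are not sure which tables exist on the node, you can list them first, for example all the Mnesia tables and all the ETS tables:
mnesia:system_info(tables).
ets:all().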
-
Ask for help on shell commands:
help().
** shell internal commands **
b()        -- display all variable bindings
e(N)       -- repeat the expression in query <N>
f()        -- forget all variable bindings
f(X)       -- forget the binding of variable X
h()        -- history
...
-
-
Close the remote Erlang shell safely
-
Once you have finished your work in the remote Erlang shell, do not forget to close it. Press Control+G again:
User switch command
 -->
-
Display the list of current jobs. You want to know the number that identifies the remote shell:
 --> j
   1  {shell,start,[]}
   2* {node2@machine2.example.com,shell,start,[]}
 -->
-
The remote shell is number 2, so now you can kill it:
 --> k 2
 -->
-
Since no success message is shown, you will want to verify you actually closed the remote shell:
 --> j
   1  {shell,start,[]}
 -->
-
There is a quicker way to start an Erlang node, connect to another node and open a shell there: use the '-remsh' command line option when starting the node (the local node must use the same name type, -name or -sname, as the remote one):
erl -name node1 -remsh node2@machine2.example.com
Erlang (BEAM) emulator version 5.4.8 [source] [hipe]

Eshell V5.4.8  (abort with ^G)
(node2@machine2.example.com)1>
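Be careful when leaving such a session: calling q() or init:stop() in the remsh shell stops the remote node, not the local one. To exit safely, press Control+G and then q, which quits only your local node and leaves the remote node running.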
Related Links
- Erlang documentation: net_adm
- Erlang documentation: erlang
- Erlang reference manual: Distributed Erlang
- Tutorial: Erlang - Starting a set of cluster nodes
- Tutorial: L'administration d'un environnement Erlang (in French)
There is another .erlang.cookie on FreeBSD
On a FreeBSD ejabberd installation there can be more than one cookie file: one in /usr/local/lib/erlang/lib/ejabberd-2.0.3 and another in /var/spool/ejabberd/. The one in /var/spool/ejabberd/ appears to be the one actually used when ejabberd is started with the /usr/local/etc/rc.d/ejabberd script, so copy the cookie from /var/spool/ejabberd/ to /usr/local/lib/erlang/lib/ejabberd-2.0.3/ so that both locations use the same cookie.