Domain Socket Support

MongoDB has built-in support for connecting over Unix domain sockets and opens the socket on startup. By default, the socket is located at /tmp/mongodb-<port>.sock.
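Because the socket file only exists while mongod is running, checking for it is a quick way to diagnose connection failures. A minimal sketch, assuming the default port 27017 (the helper name mongoSocketPath is illustrative, not part of the driver):

```php
<?php
// Sketch: build the default socket path for a given port and report
// whether mongod has created it (i.e. whether it appears to be running).
// mongoSocketPath() is a hypothetical helper, not a driver function.
function mongoSocketPath($port = 27017) {
    return "/tmp/mongodb-{$port}.sock";
}

$socket = mongoSocketPath(27017);
echo $socket, "\n";
echo file_exists($socket) ? "socket present\n" : "socket missing - is mongod running?\n";
?>
```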

To connect to the socket file, specify its path in the MongoDB connection string:

<?php
$m = new MongoClient("mongodb:///tmp/mongo-27017.sock");
?>

To authenticate against a database (as described above) when using a socket file, you must specify port 0 so that the connection string parser can detect the end of the socket path. Alternatively, you can pass the options to the constructor.

<?php
$m = new MongoClient("mongodb://username:password@/tmp/mongo-27017.sock:0/foo");
?>
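The constructor-options alternative mentioned above avoids the port-0 workaround entirely, since the socket path in the URI then has no trailing credentials or database to delimit. A sketch using the legacy driver's documented "username", "password", and "db" options (the credential values shown are placeholders):

```php
<?php
// Sketch: authenticate over a Unix domain socket by passing credentials
// as constructor options instead of embedding them in the URI.
// "username", "password", and "db" are MongoClient constructor options;
// the values here are placeholders and require a running mongod to test.
$m = new MongoClient(
    "mongodb:///tmp/mongo-27017.sock",
    array(
        "username" => "username",
        "password" => "password",
        "db"       => "foo",
    )
);
?>
```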

Changelog

Version	Description
1.0.9	Added support for Unix domain sockets.

User Contributed Notes (3 notes)

East Ghost Com
11 years ago
Confirming fast performance running Apache and MongoDB (even a replicaSet secondary, with a distant primary over WAN) on the same box, communicating via a Unix domain socket.

$Mongo = new \MongoClient(
    'mongodb:///tmp/mongodb-27017.sock',
    array(
        'replicaSet' => 'rs1',
        'timeout'    => 300000,
    )
);
$Mdb = $Mongo->DB; // create or open DB
$Mdb->setReadPreference( \MongoClient::RP_NEAREST );

Postgres core developer Bruce Momjian has blogged about this topic. Momjian states, "Unix-domain socket communication is measurably faster." He measured query network performance showing that the local domain socket was 33% faster than using the TCP/IP stack.

http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets

Excerpt: IP sockets over localhost are basically looped-back network on-the-wire IP. There is intentionally “no special knowledge” of the fact that the connection is to the same system, so no effort is made to bypass the normal IP stack mechanisms for performance reasons. For example, transmission over TCP will always involve two context switches to get to the remote socket, as you have to switch through the netisr, which occurs following the “loopback” of the packet through the synthetic loopback interface. Likewise, you get all the overhead of ACKs, TCP flow control, encapsulation/decapsulation, etc. Routing will be performed in order to decide if the packets go to the localhost. Large sends will have to be broken down into MTU-size datagrams, which also adds overhead for large writes. It’s really TCP, it just goes over a loopback interface by virtue of a special address, or discovering that the address requested is served locally rather than over an ethernet (etc).

UNIX domain sockets have explicit knowledge that they’re executing on the same system. They avoid the extra context switch through the netisr, and a sending thread will write the stream or datagrams directly into the receiving socket buffer. No checksums are calculated, no headers are inserted, no routing is performed, etc. Because they have access to the remote socket buffer, they can also directly provide feedback to the sender when it is filling, or more importantly, emptying, rather than having the added overhead of explicit acknowledgement and window changes. The one piece of functionality that UNIX domain sockets don’t provide that TCP does is out-of-band data. In practice, this is an issue for almost no one.

http://osnet.cs.binghamton.edu/publications/TR-20070820.pdf

Excerpt: It was hypothesized that pipes would have the highest throughput due to their limited functionality, since they are half-duplex, but this was not true. For almost all of the data sizes transferred, Unix domain sockets performed better than both TCP sockets and pipes, as can be seen in Figure 1 below. Figure 1 shows the transfer rates for the IPC mechanisms, but it should be noted that they do not represent the speeds obtained by all of the test machines. The transfer rates are consistent across the machines with similar hardware configurations, though. On some machines, Unix domain sockets reached transfer rates as high as 1500 MB/s.

http://bhavin.directi.com/unix-domain-sockets-vs-tcp-sockets/
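The latency gap described in these excerpts can be observed without MongoDB at all, using a plain echo round-trip over PHP's stream API. A sketch; the socket path /tmp/uds_demo.sock and port 12701 are arbitrary choices for the demo, and absolute numbers vary by machine:

```php
<?php
// Sketch: round-trip echo latency over a Unix domain socket vs loopback
// TCP, using only PHP's stream functions (no MongoDB involved).
function measure($bind, $connect, $iterations = 1000) {
    $server = stream_socket_server($bind, $errno, $errstr);
    $client = stream_socket_client($connect, $errno, $errstr);
    $conn   = stream_socket_accept($server);   // client is already queued

    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        fwrite($client, "ping");   // client -> server
        fread($conn, 4);
        fwrite($conn, "pong");     // server -> client
        fread($client, 4);
    }
    $elapsed = microtime(true) - $start;

    fclose($client);
    fclose($conn);
    fclose($server);
    return $elapsed;
}

$udsPath = "/tmp/uds_demo.sock";
@unlink($udsPath);                              // path must not exist yet
$uds = measure("unix://$udsPath", "unix://$udsPath");
$tcp = measure("tcp://127.0.0.1:12701", "tcp://127.0.0.1:12701");
@unlink($udsPath);

printf("unix: %.4fs  tcp: %.4fs\n", $uds, $tcp);
?>
```

On a typical Linux box the Unix-socket round-trip comes out faster, consistent with the excerpts above, though nowhere near the application-level ratios reported in the notes here, since those also include driver and query overhead.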
mike at eastghost dot com
11 years ago
We've enjoyed a 100x-200x speed boost just by changing from a TCP connection to a Unix domain socket. Page loads went from 1,400 ms down to 7 ms instantly.
ere dot maijala at helsinki dot fi
11 years ago
In my case (CentOS 6.4, PHP 5.3.3, MongoDB 2.4.5, PHP Mongo driver 1.4.2, all on the same system under VMWare, mixed updates, inserts and finds) unix domain sockets seem to work more than ten times faster than a TCP/IP connection to localhost.