kanoi 10 years ago
parent
commit
f6eddbc99e
Changed files (lines changed):
AUTHORS (7)
ChangeLog (4)
README (216)
ckpool.conf (3)
ckproxy.conf (3)
src/ckpool.c (8)
src/ckpool.h (2)
src/connector.c (10)
src/stratifier.c (22)
src/uthash.h (4)

7
AUTHORS

@@ -0,0 +1,7 @@
Con Kolivas <kernel@kolivas.org>
Core project lead, maintainer, author of ckpool and libckpool.
15qSxP1SQcUX3o4nhkfdbgyoWEFMomJ4rZ
Andrew Smith <kan0i {at} kano-kun [dot] net>
Maintainer and author of ckdb.
1Jjk2LmktEQKnv8r2cZ9MvLiZwZ9gxabKm

4
ChangeLog

@@ -0,0 +1,4 @@
See git repository ('git log') for full changelog.
Git repository can be found at:
https://bitbucket.org/ckolivas/ckpool

216
README

@@ -0,0 +1,216 @@
CKPOOL + CKDB + libckpool by Con Kolivas and Andrew Smith.
Ultra low overhead, massively scalable, multi-process, multi-threaded modular
bitcoin mining pool, proxy, passthrough, library and database interface in C
for Linux.
---
LICENSE:
GNU General Public License V3. See included COPYING for details.
---
DESIGN:
Architecture:
- Low level hand coded architecture relying on minimal outside libraries beyond
basic glibc functions for maximum flexibility and minimal overhead that can be
built and deployed on any Linux installation.
- Multiprocess+multithreaded design to scale to massive deployments and
capitalise on modern multicore/multithread CPU designs.
- Minimal memory overhead.
- Utilises ultra reliable unix sockets for communication with dependent
processes.
- Modular code design to streamline further development.
- Standalone library code that can be utilised independently of ckpool.
- Same code can be deployed in many different modes designed to talk to each
other on the same machine, local lan or remote internet locations.
Modes of deployment:
- Comprehensive pooled mining solution with a postgresql database interface.
- Passthrough node(s) that combine connections to a single socket which can
be used to scale to millions of clients and allow the main pool to be isolated
from direct communication with clients.
- Proxy nodes with a database that act as a single client to the upstream pool
while storing full client data of their own.
- Simple proxy without the limitations of hashrate inherent in other proxy
solutions when talking to ckpool.
- Simple pool without a database.
- Library for use by other software.
Features:
- Bitcoind communication to unmodified bitcoind with multiple failover to local
or remote locations.
- Local pool instance worker count limited only by operating system resources,
and can be made virtually limitless through use of multiple downstream
passthrough nodes.
- Proxy and passthrough modes can set up multiple failover upstream pools.
- Optional share logging.
- Virtually seamless restarts for upgrades through socket handover from exiting
instances to new starting instance.
- Configurable custom coinbase signature.
- Configurable instant starting and minimum difficulty.
- Rapid vardiff adjustment with stable unlimited maximum difficulty handling.
- New work generation on block changes incorporates the full bitcoind
transaction set without delay and without needing to send transactionless work
to miners, thereby providing the best bitcoin network support and rewarding
miners with the most transaction fees.
- Event driven communication based on communication readiness preventing
slow communicating clients from delaying low latency ones.
- Stratum messaging system to running clients.
- Accurate pool and per client statistics.
- Multiple named instances can be run concurrently on the same machine.
---
BUILDING:
Building ckpool standalone without ckdb has no dependencies outside of the
basic build tools on any Linux installation.
sudo apt-get install build-essential
./configure --without-ckdb
make
Building with ckdb requires installation of the postgresql development library.
sudo apt-get install build-essential libpq-dev
./configure
make
Building from git also requires autoconf and automake
sudo apt-get install build-essential libpq-dev autoconf automake
./autogen.sh
./configure
make
Binaries will be built in the src/ subdirectory.
Installation is NOT required and ckpool can be run directly from the directory
it's built in, but it can be installed with:
sudo make install
It is anticipated that pool operators wishing to set up a full database-backed
installation of ckpool+ckdb will be familiar with setting up postgresql, the
permissions for the directories where the various processes communicate with
each other, and a web server, so these steps are not documented here.
---
RUNNING:
ckpool supports the following options:
-A | --standalone
-c CONFIG | --config CONFIG
-d CKDB-NAME | --ckdb-name CKDB-NAME
-g GROUP | --group GROUP
-H | --handover
-h | --help
-k | --killold
-L | --log-shares
-l LOGLEVEL | --loglevel LOGLEVEL
-n NAME | --name NAME
-P | --passthrough
-p | --proxy
-S CKDB-SOCKDIR | --ckdb-sockdir CKDB-SOCKDIR
-s SOCKDIR | --sockdir SOCKDIR
-A Standalone mode tells ckpool not to try to communicate with ckdb or log any
ckdb requests in the rotating ckdb logs it would otherwise store. All users
are automatically accepted without any attempt to authorise users in any way.
-c <CONFIG> tells ckpool to override its default configuration filename and
load the specified one. If -c is not specified, ckpool looks for ckpool.conf,
whereas in proxy or passthrough modes it will look for ckproxy.conf.
-d <CKDB-NAME> tells ckpool what the name of the ckdb process is that it should
speak to, otherwise it will look for ckdb.
-g <GROUP> will start ckpool as the group ID specified.
-H will make ckpool attempt to receive a handover from a running instance of
ckpool with the same name, taking its client listening socket and shutting it
down.
-h displays the above help.
-k will make ckpool shut down an existing instance of ckpool with the same name,
killing it if need be. Otherwise ckpool will refuse to start if an instance of
the same name is already running.
-L will log per share information in the logs directory divided by block height
and then workbase.
-l <LOGLEVEL> will change the log level to that specified. Default is 5 and
maximum debug is level 7.
-n <NAME> will change the ckpool process name to that specified, allowing
multiple different named instances to be running.
-P will start ckpool in passthrough proxy mode where it collates all incoming
connections and streams all information on a single connection to an upstream
pool specified in ckproxy.conf. Downstream users all retain their individual
presence on the master pool. Standalone mode is implied.
-p will start ckpool in proxy mode where it appears to be a local pool handling
clients as separate entities while presenting shares as a single user to the
upstream pool specified. Note that the upstream pool needs to be a ckpool for
it to scale to large hashrates. Standalone mode is optional.
-S <CKDB-SOCKDIR> tells ckpool which directory to look for the ckdb socket to
talk to.
-s <SOCKDIR> tells ckpool which directory to place its own communication
sockets in (/tmp by default).
---
CONFIGURATION
At least one bitcoind is mandatory in ckpool mode with the minimum requirements
of server, rpcuser and rpcpassword set.
Ckpool takes a json encoded configuration file in ckpool.conf by default or
ckproxy.conf in proxy or passthrough mode unless specified with -c. Sample
configurations for ckpool and ckproxy are included with the source. Entries
after the valid json are ignored and the space there can be used for comments.
The options recognised are as follows:
"btcd" : This is an array of bitcoind(s) with the options url, auth and pass
which match the configured bitcoind. This is mandatory in pool mode.
"proxy" : This is an array in the same format as btcd above but is used in
proxy and passthrough mode to set the upstream pool and is mandatory.
"btcaddress" : This is the bitcoin address to try to generate blocks to.
"btcsig" : This is an optional signature to put into the coinbase of mined
blocks.
"blockpoll" : This is the frequency in milliseconds for how often to check for
new network blocks and is 500 by default.
"update_interval" : This is the frequency that stratum updates are sent out to
miners and is set to 30 seconds by default to help perpetuate transactions for
the health of the bitcoin network.
"serverurl" : This is the IP to try to bind ckpool uniquely to, otherwise it
will attempt to bind to all interfaces in port 3333 by default in pool mode
and 3334 in proxy mode.
"mindiff" : Minimum diff that vardiff will allow miners to drop to. Default 1
"startdiff" : Starting diff that new clients are given. Default 42
"logdir" : Which directory to store pool and client logs. Default "logs"

3
ckpool.conf

@@ -17,6 +17,7 @@
 "update_interval" : 30,
 "serverurl" : "ckpool.org:3333",
 "mindiff" : 1,
-"startdiff" : 1,
+"startdiff" : 42,
 "logdir" : "logs"
 }
+Comments from here on are ignored.

3
ckproxy.conf

@@ -14,6 +14,7 @@
 "update_interval" : 30,
 "serverurl" : "192.168.1.100:3334",
 "mindiff" : 1,
-"startdiff" : 1,
+"startdiff" : 42,
 "logdir" : "logs"
 }
+Comments from here on are ignored.

8
src/ckpool.c

@@ -1015,6 +1015,7 @@ static struct option long_options[] = {
 	{"handover", no_argument, 0, 'H'},
 	{"help", no_argument, 0, 'h'},
 	{"killold", no_argument, 0, 'k'},
+	{"log-shares", no_argument, 0, 'L'},
 	{"loglevel", required_argument, 0, 'l'},
 	{"name", required_argument, 0, 'n'},
 	{"passthrough", no_argument, 0, 'P'},
@@ -1042,7 +1043,7 @@ int main(int argc, char **argv)
 		ckp.initial_args[ckp.args] = strdup(argv[ckp.args]);
 	ckp.initial_args[ckp.args] = NULL;
-	while ((c = getopt_long(argc, argv, "Ac:d:g:Hhkl:n:PpS:s:", long_options, &i)) != -1) {
+	while ((c = getopt_long(argc, argv, "Ac:d:g:HhkLl:n:PpS:s:", long_options, &i)) != -1) {
 		switch (c) {
 			case 'A':
 				ckp.standalone = true;
@@ -1080,6 +1081,9 @@ int main(int argc, char **argv)
 			case 'k':
 				ckp.killold = true;
 				break;
+			case 'L':
+				ckp.logshares = true;
+				break;
 			case 'l':
 				ckp.loglevel = atoi(optarg);
 				if (ckp.loglevel < LOG_EMERG || ckp.loglevel > LOG_DEBUG) {
@@ -1093,7 +1097,7 @@ int main(int argc, char **argv)
 			case 'P':
 				if (ckp.proxy)
 					quit(1, "Cannot set both proxy and passthrough mode");
-				ckp.proxy = ckp.passthrough = true;
+				ckp.standalone = ckp.proxy = ckp.passthrough = true;
 				break;
 			case 'p':
 				if (ckp.passthrough)

2
src/ckpool.h

@@ -87,6 +87,8 @@ struct ckpool_instance {
 	char *config;
 	/* Kill old instance with same name */
 	bool killold;
+	/* Whether to log shares or not */
+	bool logshares;
 	/* Logging level */
 	int loglevel;
 	/* Main process name */

10
src/connector.c

@@ -136,7 +136,7 @@ retry:
 	ck_wlock(&ci->lock);
 	client->id = client_id++;
-	HASH_ADD_INT(clients, id, client);
+	HASH_ADD_I64(clients, id, client);
 	HASH_REPLACE(fdhh, fdclients, fd, SOI, client, old_client);
 	ci->nfds++;
 	ck_wunlock(&ci->lock);
@@ -441,16 +441,16 @@ static void send_client(conn_instance_t *ci, int64_t id, char *buf)
 	}
 	ck_rlock(&ci->lock);
-	HASH_FIND_INT(clients, &id, client);
+	HASH_FIND_I64(clients, &id, client);
 	if (likely(client))
 		fd = client->fd;
 	ck_runlock(&ci->lock);
 	if (unlikely(fd == -1)) {
 		if (client)
-			LOGINFO("Client id %d disconnected", id);
+			LOGINFO("Client id %ld disconnected", id);
 		else
-			LOGINFO("Connector failed to find client id %d to send to", id);
+			LOGINFO("Connector failed to find client id %ld to send to", id);
 		free(buf);
 		return;
 	}
@@ -471,7 +471,7 @@ static client_instance_t *client_by_id(conn_instance_t *ci, int64_t id)
 	client_instance_t *client;
 	ck_rlock(&ci->lock);
-	HASH_FIND_INT(clients, &id, client);
+	HASH_FIND_I64(clients, &id, client);
 	ck_runlock(&ci->lock);
 	return client;

22
src/stratifier.c

@@ -530,13 +530,14 @@ static void add_base(ckpool_t *ckp, workbase_t *wb, bool *new_block)
 		memcpy(lasthash, wb->prevhash, 65);
 		blockchange_id = wb->id;
 	}
-	if (*new_block) {
+	if (*new_block && ckp->logshares) {
 		sprintf(wb->logdir, "%s%08x/", ckp->logdir, wb->height);
 		ret = mkdir(wb->logdir, 0750);
 		if (unlikely(ret && errno != EEXIST))
 			LOGERR("Failed to create log directory %s", wb->logdir);
 	}
 	sprintf(wb->idstring, "%016lx", wb->id);
-	sprintf(wb->logdir, "%s%08x/%s", ckp->logdir, wb->height, wb->idstring);
+	if (ckp->logshares)
+		sprintf(wb->logdir, "%s%08x/%s", ckp->logdir, wb->height, wb->idstring);
 	HASH_ITER(hh, workbases, tmp, tmpa) {
@@ -549,7 +550,7 @@ static void add_base(ckpool_t *ckp, workbase_t *wb, bool *new_block)
 			break;
 		}
 	}
-	HASH_ADD_INT(workbases, id, wb);
+	HASH_ADD_I64(workbases, id, wb);
 	current_workbase = wb;
 	ck_wunlock(&workbase_lock);
@@ -797,7 +798,7 @@ static stratum_instance_t *__instance_by_id(int64_t id)
 {
 	stratum_instance_t *instance;
-	HASH_FIND_INT(stratum_instances, &id, instance);
+	HASH_FIND_I64(stratum_instances, &id, instance);
 	return instance;
 }
@@ -811,7 +812,7 @@ static stratum_instance_t *__stratum_add_instance(ckpool_t *ckp, int64_t id)
 	instance->ckp = ckp;
 	tv_time(&instance->ldc);
 	LOGINFO("Added instance %d", id);
-	HASH_ADD_INT(stratum_instances, id, instance);
+	HASH_ADD_I64(stratum_instances, id, instance);
 	return instance;
 }
@@ -1213,10 +1214,10 @@ static json_t *parse_subscribe(int64_t client_id, json_t *params_val)
 	if (!old_match) {
 		/* Create a new extranonce1 based on a uint64_t pointer */
 		new_enonce1(client);
-		LOGINFO("Set new subscription %d to new enonce1 %s", client->id,
+		LOGINFO("Set new subscription %ld to new enonce1 %s", client->id,
 			client->enonce1);
 	} else {
-		LOGINFO("Set new subscription %d to old matched enonce1 %s", client->id,
+		LOGINFO("Set new subscription %ld to old matched enonce1 %s", client->id,
 			client->enonce1);
 	}
@@ -1352,7 +1353,7 @@ static json_t *parse_authorise(stratum_instance_t *client, json_t *params_val, j
 	client->start_time = now.tv_sec;
 	strcpy(client->address, address);
-	LOGNOTICE("Authorised client %d worker %s as user %s", client->id, buf,
+	LOGNOTICE("Authorised client %ld worker %s as user %s", client->id, buf,
 		  client->user_instance->username);
 	client->workername = strdup(buf);
 	if (client->ckp->standalone)
@@ -1680,9 +1681,9 @@ static json_t *parse_submit(stratum_instance_t *client, json_t *json_msg,
 	char hexhash[68] = {}, sharehash[32], cdfield[64];
 	enum share_err err = SE_NONE;
 	ckpool_t *ckp = client->ckp;
+	char *fname = NULL, *s;
 	char idstring[20];
 	uint32_t ntime32;
-	char *fname, *s;
 	workbase_t *wb;
 	uchar hash[32];
 	int64_t id;
@@ -1745,7 +1746,7 @@ static json_t *parse_submit(stratum_instance_t *client, json_t *json_msg,
 	share = true;
 	ck_rlock(&workbase_lock);
-	HASH_FIND_INT(workbases, &id, wb);
+	HASH_FIND_I64(workbases, &id, wb);
 	if (unlikely(!wb)) {
 		err = SE_INVALID_JOBID;
 		json_set_string(json_msg, "reject-reason", SHARE_ERR(err));
@@ -1836,6 +1837,7 @@ out_unlock:
 	json_set_string(val, "workername", client->workername);
 	json_set_string(val, "username", client->user_instance->username);
+	if (ckp->logshares) {
 	fp = fopen(fname, "a");
 	if (likely(fp)) {
 		s = json_dumps(val, 0);
@@ -1847,6 +1849,7 @@ out_unlock:
 			LOGERR("Failed to fwrite to %s", fname);
 		} else
 			LOGERR("Failed to fopen %s", fname);
+	}
 	ckdbq_add(ckp, ID_SHARES, val);
 out:
 	if (!share) {
@@ -1866,6 +1869,7 @@ out:
 		ckdbq_add(ckp, ID_SHAREERR, val);
 		LOGINFO("Invalid share from client %d: %s", client->id, client->workername);
 	}
+	free(fname);
 	return json_boolean(result);
 }

4
src/uthash.h

@@ -261,6 +261,10 @@ do {
 	HASH_ADD(hh,head,intfield,sizeof(int),add)
 #define HASH_REPLACE_INT(head,intfield,add,replaced) \
 	HASH_REPLACE(hh,head,intfield,sizeof(int),add,replaced)
+#define HASH_FIND_I64(head,findint,out) \
+	HASH_FIND(hh,head,findint,sizeof(int64_t),out)
+#define HASH_ADD_I64(head,intfield,add) \
+	HASH_ADD(hh,head,intfield,sizeof(int64_t),add)
 #define HASH_FIND_PTR(head,findptr,out) \
 	HASH_FIND(hh,head,findptr,sizeof(void *),out)
 #define HASH_ADD_PTR(head,ptrfield,add) \
 	HASH_ADD(hh,head,ptrfield,sizeof(void *),add)
