ApiUrl: '//example.net/' to match your domain name.
+
+ npm install -g ember-cli@2.4.3
+ npm install -g bower
+ npm install
+ bower install
+ ./build.sh
+
+Configure nginx to serve the API on the /api subdirectory.
+Configure your nginx instance to serve www/dist as a static website.
+
+#### Serving API using nginx
+
+Create an upstream for API:
+
+ upstream api {
+ server 127.0.0.1:8080;
+ }
+
+and add this setting after `location /`:
+
+ location /api {
+ proxy_pass http://api;
+ }
+
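+Putting it together, a complete server section might look like this (a minimal
+sketch that reuses the upstream defined above; the server_name and root path are
+examples, adjust them to your deployment):
+
+ server {
+ listen 80;
+ server_name example.net;
+ root /path/to/www/dist;
+
+ location / {
+ try_files $uri $uri/ /index.html;
+ }
+
+ location /api {
+ proxy_pass http://api;
+ }
+ }
+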
+You can customize the layout and other stuff using the built-in web server with live reload:
+
+ ember server --port 8082 --environment development
+
+#### Customization
+
+Check out the www/app/templates directory and edit these templates
+in order to add your own branding and contacts.
+
+### Configuration
+
+Configuration is straightforward: read it twice and think twice before changing defaults.
+
+**Don't copy the config directly from this manual; use the example config from the package,
+otherwise you will get errors on start, because JSON can't contain comments.**
+
+```javascript
+{
+ // Set to the number of CPU threads of your server
+ "threads": 2,
+ // Prefix for keys in redis store
+ "coin": "eth",
+ // Give unique name to each instance
+ "name": "main",
+
+ "proxy": {
+ "enabled": true,
+
+ // Bind mining endpoint to this IP:PORT
+ "listen": "0.0.0.0:8546",
+
+ // Maximum allowed header and body size of HTTP requests from miners
+ "limitHeadersSize": 1024,
+ "limitBodySize": 256,
+
+ /* Use this if you are behind CloudFlare (bad idea) or an HTTP reverse
+ proxy to enable IP detection from the X-Forwarded-For header.
+ Advanced users only. It's tricky to make it right and secure.
+ */
+ "behindReverseProxy": false,
+
+ // Try to get new job from geth in this interval
+ "blockRefreshInterval": "120ms",
+ "stateUpdateInterval": "3s",
+ // Require this share difficulty from miners
+ "difficulty": 2000000000,
+
+ "hashrateExpiration": "30m",
+
+ /* Reply with an error to miners instead of a job if redis is unavailable.
+ This should save electricity for miners if the pool is sick and they didn't set up failovers.
+ */
+ "healthCheck": true,
+ // Mark pool sick after this number of redis failures.
+ "maxFails": 100,
+
+ "policy": {
+ "workers": 8,
+ "resetInterval": "60m",
+ "refreshInterval": "1m",
+
+ "banning": {
+ "enabled": false,
+ /* Name of ipset for banning.
+ Check http://ipset.netfilter.org/ documentation.
+ */
+ "ipset": "blacklist",
+ // Remove ban after this amount of time
+ "timeout": 1800,
+ // Percent of invalid shares out of all shares that triggers a ban
+ "invalidPercent": 30,
+ // Check only after a miner has submitted this number of shares
+ "checkThreshold": 30,
+ // Ban a miner after this number of malformed requests
+ "malformedLimit": 5
+ }
+ }
+ },
+
+ // Provides JSON data for the frontend, which is a static website
+ "api": {
+ "enabled": true,
+
+ /* If you are running the API module in a distributed environment where it
+ reads data from a writeable redis slave, enable this option: such an instance
+ will only purge stale stats instead of serving data.
+ Only a writeable redis slave will work properly if you are distributing using redis slaves.
+ Don't distribute!
+ */
+ "purgeOnly": false,
+ "listen": "0.0.0.0:8080",
+ // Collect miners stats (hashrate, ...) in this interval
+ "statsCollectInterval": "5s",
+
+ // Fast hashrate estimation window for each miner from its shares
+ "hashrateWindow": "30m",
+ // Long and precise hashrate from shares, 3h is cool, keep it
+ "hashrateLargeWindow": "3h",
+ // Max number of payments to display in frontend
+ "payments": 50,
+ // Max number of blocks to display in frontend
+ "blocks": 50
+ },
+
+ // Check health of each geth node in this interval
+ "upstreamCheckInterval": "5s",
+
+ /* List of geth nodes to poll for new jobs. The pool will try to get work from the
+ first alive node and check failed ones in the background in order to fall back.
+ The pool's current block template is always cached in RAM.
+ */
+ */
+ "upstream": [
+ {
+ "name": "main",
+ "url": "http://127.0.0.1:8545",
+ "timeout": "10s"
+ },
+ {
+ "name": "backup",
+ "url": "http://127.0.0.2:8545",
+ "timeout": "10s"
+ }
+ ],
+
+ // Standard redis connection options
+ "redis": {
+ // Where your redis instance is listening for commands
+ "endpoint": "127.0.0.1:6379",
+ "poolSize": 8,
+ "database": 0,
+ /* Generate a very strong password, set it in your redis configuration
+ file using the requirepass directive, and specify it here.
+ */
+ "password": "secret"
+ },
+
+ // This module periodically credits coins to miners
+ "unlocker": {
+ "enabled": false,
+ // Pool fee percentage
+ "poolFee": 1.0,
+ // Unlock a block only after this many blocks have been mined on top of it
+ "depth": 120,
+ // Simply don't touch this option
+ "immatureDepth": 20,
+ // Run unlocker in this interval
+ "interval": "10m",
+ // Geth instance node rpc endpoint for unlocking blocks
+ "daemon": "http://127.0.0.1:8545",
+ // Raise an error if geth can't be reached within this amount of time
+ "timeout": "10s"
+ },
+
+ // Paying out miners using this module
+ "payouts": {
+ "enabled": false,
+ // Run payouts in this interval
+ "interval": "12h",
+ // Geth instance node rpc endpoint for payouts processing
+ "daemon": "http://127.0.0.1:8545",
+ // Raise an error if geth can't be reached within this amount of time
+ "timeout": "10s",
+ // Address with pool balance
+ "address": "0x0",
+ // Gas amount and price for payout tx
+ "gas": "21000",
+ "gasPrice": "50000000000",
+ // Send a payment only if the miner's balance is >= 0.5 Ether
+ "threshold": 500000000
+ }
+}
+```
+
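+Since the real config must be plain JSON, you can sanity-check your edited file
+before starting the pool, for example with jq if you have it installed:
+
+ jq . config.json > /dev/null && echo "config is valid JSON"
+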
+If you are distributing your pool deployment across several servers or processes,
+create several configs and disable the unneeded modules on each server
+(see the example fragment after the list below).
+This is very advanced; better don't distribute across several servers until you really need it.
+
+I recommend this deployment strategy:
+
+* Mining instance - 1x (it depends, you can run one node for EU, one for US, one for Asia)
+* Unlocker and payouts instance - 1x each (strict!)
+* API instance - 1x
+
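+For example, a dedicated unlocker and payouts instance could disable the proxy and
+API modules and keep everything else. A fragment for illustration only, not a complete
+config; keep the coin, redis and upstream settings from the full example and give the
+instance a unique name:
+
+```javascript
+{
+ "name": "unlocker1",
+ // ... same threads, coin, redis and upstream settings as above ...
+ "proxy": { "enabled": false },
+ "api": { "enabled": false },
+ "unlocker": { "enabled": true },
+ "payouts": { "enabled": true }
+}
+```
+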
+### Notes
+
+Unlocking and payouts are sequential: the first tx is sent, the second waits for the first to confirm, and so on.
+You can disable that in code. Also, keep in mind that unlocking and payouts will be stopped in case of any backend or geth failure.
+You must restart the module if you see errors containing the word *suspended*, so I recommend running the unlocker and payouts in separate processes.
+Don't run payouts and the unlocker as part of the mining node.
+
+### Credits
+
+Made by sammy007.
+
+### Donations
+
+* **ETH**: [0xb85150eb365e7df0941f0cf08235f987ba91506a](https://etherchain.org/account/0xb85150eb365e7df0941f0cf08235f987ba91506a)
+
+* **BTC**: [1PYqZATFuYAKS65dbzrGhkrvoN9au7WBj8](https://blockchain.info/address/1PYqZATFuYAKS65dbzrGhkrvoN9au7WBj8)
diff --git a/api/server.go b/api/server.go
new file mode 100644
index 0000000..94eb06c
--- /dev/null
+++ b/api/server.go
@@ -0,0 +1,277 @@
+package api
+
+import (
+ "encoding/json"
+ "github.com/gorilla/mux"
+ "log"
+ "net/http"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "../storage"
+ "../util"
+)
+
+type ApiConfig struct {
+ Enabled bool `json:"enabled"`
+ Listen string `json:"listen"`
+ StatsCollectInterval string `json:"statsCollectInterval"`
+ HashrateWindow string `json:"hashrateWindow"`
+ HashrateLargeWindow string `json:"hashrateLargeWindow"`
+ Payments int64 `json:"payments"`
+ Blocks int64 `json:"blocks"`
+ PurgeOnly bool `json:"purgeOnly"`
+}
+
+type ApiServer struct {
+ config *ApiConfig
+ backend *storage.RedisClient
+ hashrateWindow time.Duration
+ hashrateLargeWindow time.Duration
+ stats atomic.Value
+ miners map[string]*Entry
+ minersMu sync.RWMutex
+}
+
+type Entry struct {
+ stats map[string]interface{}
+ updatedAt int64
+}
+
+func NewApiServer(cfg *ApiConfig, backend *storage.RedisClient) *ApiServer {
+ hashrateWindow, _ := time.ParseDuration(cfg.HashrateWindow)
+ hashrateLargeWindow, _ := time.ParseDuration(cfg.HashrateLargeWindow)
+ return &ApiServer{
+ config: cfg,
+ backend: backend,
+ hashrateWindow: hashrateWindow,
+ hashrateLargeWindow: hashrateLargeWindow,
+ miners: make(map[string]*Entry),
+ }
+}
+
+func (s *ApiServer) Start() {
+ if s.config.PurgeOnly {
+ log.Printf("Starting API in purge-only mode")
+ } else {
+ log.Printf("Starting API on %v", s.config.Listen)
+ }
+
+ statsIntv, _ := time.ParseDuration(s.config.StatsCollectInterval)
+ statsTimer := time.NewTimer(statsIntv)
+ log.Printf("Set stats collect interval to %v", statsIntv)
+
+ // Running only to flush stale data
+ if s.config.PurgeOnly {
+ s.purgeStale()
+ } else {
+ // Immediately collect stats
+ s.collectStats()
+ }
+
+ go func() {
+ for {
+ select {
+ case <-statsTimer.C:
+ if s.config.PurgeOnly {
+ s.purgeStale()
+ } else {
+ s.collectStats()
+ }
+ statsTimer.Reset(statsIntv)
+ }
+ }
+ }()
+
+ if !s.config.PurgeOnly {
+ s.listen()
+ }
+}
+
+func (s *ApiServer) listen() {
+ r := mux.NewRouter()
+ r.HandleFunc("/api/stats", s.StatsIndex)
+ r.HandleFunc("/api/miners", s.MinersIndex)
+ r.HandleFunc("/api/blocks", s.BlocksIndex)
+ r.HandleFunc("/api/payments", s.PaymentsIndex)
+ r.HandleFunc("/api/accounts/{login:0x[0-9a-f]{40}}", s.AccountIndex)
+ r.NotFoundHandler = http.HandlerFunc(notFound)
+ err := http.ListenAndServe(s.config.Listen, r)
+ if err != nil {
+ log.Fatalf("Failed to start API: %v", err)
+ }
+}
+
+func notFound(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusNotFound)
+}
+
+func (s *ApiServer) purgeStale() {
+ total, err := s.backend.FlushStaleStats(s.hashrateLargeWindow)
+ if err != nil {
+ log.Printf("Failed to purge stale data from backend: ", err)
+ } else {
+ log.Printf("Purged stale stats from backend, %v shares affected", total)
+ }
+}
+
+func (s *ApiServer) collectStats() {
+ now := util.MakeTimestamp()
+ stats, err := s.backend.CollectStats(s.hashrateWindow, s.config.Blocks, s.config.Payments)
+ if err != nil {
+ log.Printf("Failed to fetch stats from backend: %v", err)
+ } else {
+ log.Printf("Stats collection finished %v", util.MakeTimestamp()-now)
+ s.stats.Store(stats)
+ }
+}
+
+func (s *ApiServer) StatsIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusOK)
+
+ reply := make(map[string]interface{})
+ nodes, err := s.backend.GetNodeStates()
+ if err != nil {
+ log.Printf("Failed to get nodes stats from backend: %v", err)
+ }
+ reply["nodes"] = nodes
+
+ stats := s.getStats()
+ if stats != nil {
+ reply["now"] = util.MakeTimestamp()
+ reply["stats"] = stats["stats"]
+ reply["hashrate"] = stats["hashrate"]
+ reply["minersTotal"] = stats["minersTotal"]
+ reply["maturedTotal"] = stats["maturedTotal"]
+ reply["immatureTotal"] = stats["immatureTotal"]
+ reply["candidatesTotal"] = stats["candidatesTotal"]
+ }
+
+ err = json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
+
+func (s *ApiServer) MinersIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusOK)
+
+ reply := make(map[string]interface{})
+ stats := s.getStats()
+ if stats != nil {
+ reply["now"] = util.MakeTimestamp()
+ reply["miners"] = stats["miners"]
+ reply["hashrate"] = stats["hashrate"]
+ reply["minersTotal"] = stats["minersTotal"]
+ }
+
+ err := json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
+
+func (s *ApiServer) BlocksIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusOK)
+
+ reply := make(map[string]interface{})
+ stats := s.getStats()
+ if stats != nil {
+ reply["matured"] = stats["matured"]
+ reply["maturedTotal"] = stats["maturedTotal"]
+ reply["immature"] = stats["immature"]
+ reply["immatureTotal"] = stats["immatureTotal"]
+ reply["candidates"] = stats["candidates"]
+ reply["candidatesTotal"] = stats["candidatesTotal"]
+ }
+
+ err := json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
+
+func (s *ApiServer) PaymentsIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusOK)
+
+ reply := make(map[string]interface{})
+ stats := s.getStats()
+ if stats != nil {
+ reply["payments"] = stats["payments"]
+ reply["paymentsTotal"] = stats["paymentsTotal"]
+ }
+
+ err := json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
+
+func (s *ApiServer) AccountIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+
+ login := mux.Vars(r)["login"]
+ reply, err := s.backend.GetMinerStats(login, s.config.Payments)
+ if err != nil {
+ w.WriteHeader(http.StatusInternalServerError)
+ log.Printf("Failed to fetch stats from backend: %v", err)
+ return
+ }
+
+ s.minersMu.Lock()
+ defer s.minersMu.Unlock()
+
+ entry, ok := s.miners[login]
+ now := util.MakeTimestamp()
+ // Refresh stats if stale
+ if !ok || entry.updatedAt < now-5000 {
+ stats, err := s.backend.CollectWorkersStats(s.hashrateWindow, s.hashrateLargeWindow, login)
+ if err != nil {
+ w.WriteHeader(http.StatusInternalServerError)
+ log.Printf("Failed to fetch stats from backend: %v", err)
+ return
+ }
+ entry = &Entry{stats: stats, updatedAt: now}
+ s.miners[login] = entry
+ }
+
+ reply["workers"] = entry.stats["workers"]
+ reply["workersTotal"] = entry.stats["workersTotal"]
+ reply["workersOnline"] = entry.stats["workersOnline"]
+ reply["workersOffline"] = entry.stats["workersOffline"]
+ reply["hashrate"] = entry.stats["hashrate"]
+ reply["currentHashrate"] = entry.stats["currentHashrate"]
+ reply["pageSize"] = s.config.Payments
+
+ w.WriteHeader(http.StatusOK)
+ err = json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
+
+func (s *ApiServer) getStats() map[string]interface{} {
+ stats := s.stats.Load()
+ if stats != nil {
+ return stats.(map[string]interface{})
+ }
+ return nil
+}
diff --git a/build.sh b/build.sh
new file mode 100755
index 0000000..bf3b63b
--- /dev/null
+++ b/build.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+go build -o ether-pool main.go
diff --git a/config.example.json b/config.example.json
new file mode 100644
index 0000000..9aa985d
--- /dev/null
+++ b/config.example.json
@@ -0,0 +1,93 @@
+{
+ "threads": 2,
+ "coin": "eth",
+ "name": "main",
+
+ "proxy": {
+ "enabled": true,
+ "listen": "0.0.0.0:8546",
+ "limitHeadersSize": 1024,
+ "limitBodySize": 256,
+ "behindReverseProxy": false,
+ "blockRefreshInterval": "120ms",
+ "stateUpdateInterval": "3s",
+ "difficulty": 2000000000,
+ "hashrateExpiration": "30m",
+
+ "healthCheck": true,
+ "maxFails": 100,
+
+ "policy": {
+ "workers": 8,
+ "resetInterval": "60m",
+ "refreshInterval": "1m",
+
+ "banning": {
+ "enabled": true,
+ "ipset": "blacklist",
+ "timeout": 1800,
+ "invalidPercent": 30,
+ "checkThreshold": 30,
+ "malformedLimit": 5
+ }
+ }
+ },
+
+ "api": {
+ "enabled": true,
+ "purgeOnly": false,
+ "listen": "0.0.0.0:8080",
+ "statsCollectInterval": "5s",
+ "hashrateWindow": "30m",
+ "hashrateLargeWindow": "3h",
+ "payments": 30,
+ "blocks": 50
+ },
+
+ "upstreamCheckInterval": "5s",
+ "upstream": [
+ {
+ "name": "main",
+ "url": "http://127.0.0.1:8545",
+ "timeout": "10s"
+ },
+ {
+ "name": "backup",
+ "url": "http://127.0.0.2:8545",
+ "timeout": "10s"
+ }
+ ],
+
+ "redis": {
+ "endpoint": "127.0.0.1:6379",
+ "poolSize": 5,
+ "database": 0,
+ "password": "secret"
+ },
+
+ "unlocker": {
+ "enabled": false,
+ "poolFee": 1.0,
+ "depth": 120,
+ "immatureDepth": 20,
+ "interval": "10m",
+ "daemon": "http://127.0.0.1:8545",
+ "timeout": "10s"
+ },
+
+ "payouts": {
+ "enabled": false,
+ "interval": "120m",
+ "daemon": "http://127.0.0.1:8545",
+ "timeout": "10s",
+ "address": "0x0",
+ "gas": "21000",
+ "gasPrice": "50000000000",
+ "threshold": 500000000
+ },
+
+ "newrelicEnabled": false,
+ "newrelicName": "MyEtherProxy",
+ "newrelicKey": "SECRET_KEY",
+ "newrelicVerbose": false
+}
diff --git a/main.go b/main.go
new file mode 100644
index 0000000..7801c72
--- /dev/null
+++ b/main.go
@@ -0,0 +1,102 @@
+package main
+
+import (
+ "encoding/json"
+ "log"
+ "os"
+ "path/filepath"
+ "runtime"
+
+ "./api"
+ "./payouts"
+ "./proxy"
+ "./storage"
+
+ "github.com/yvasiyarov/gorelic"
+)
+
+var cfg proxy.Config
+var backend *storage.RedisClient
+
+func startProxy() {
+ s := proxy.NewProxy(&cfg, backend)
+ s.Start()
+}
+
+func startApi() {
+ s := api.NewApiServer(&cfg.Api, backend)
+ s.Start()
+}
+
+func startBlockUnlocker() {
+ u := payouts.NewBlockUnlocker(&cfg.BlockUnlocker, backend)
+ u.Start()
+}
+
+func startPayoutsProcessor() {
+ u := payouts.NewPayoutsProcessor(&cfg.Payouts, backend)
+ u.Start()
+}
+
+func startNewrelic() {
+ if cfg.NewrelicEnabled {
+ nr := gorelic.NewAgent()
+ nr.Verbose = cfg.NewrelicVerbose
+ nr.NewrelicLicense = cfg.NewrelicKey
+ nr.NewrelicName = cfg.NewrelicName
+ nr.Run()
+ }
+}
+
+func readConfig(cfg *proxy.Config) {
+ configFileName := "config.json"
+ if len(os.Args) > 1 {
+ configFileName = os.Args[1]
+ }
+ configFileName, _ = filepath.Abs(configFileName)
+ log.Printf("Loading config: %v", configFileName)
+
+ configFile, err := os.Open(configFileName)
+ if err != nil {
+ log.Fatal("File error: ", err.Error())
+ }
+ defer configFile.Close()
+ jsonParser := json.NewDecoder(configFile)
+ if err := jsonParser.Decode(&cfg); err != nil {
+ log.Fatal("Config error: ", err.Error())
+ }
+}
+
+func main() {
+ readConfig(&cfg)
+
+ if cfg.Threads > 0 {
+ runtime.GOMAXPROCS(cfg.Threads)
+ log.Printf("Running with %v threads", cfg.Threads)
+ }
+
+ startNewrelic()
+
+ backend = storage.NewRedisClient(&cfg.Redis, cfg.Coin)
+ pong, err := backend.Check()
+ if err != nil {
+ log.Printf("Can't establish connection to backend: %v", err)
+ } else {
+ log.Printf("Backend check reply: %v", pong)
+ }
+
+ if cfg.Proxy.Enabled {
+ go startProxy()
+ }
+ if cfg.Api.Enabled {
+ go startApi()
+ }
+ if cfg.BlockUnlocker.Enabled {
+ go startBlockUnlocker()
+ }
+ if cfg.Payouts.Enabled {
+ go startPayoutsProcessor()
+ }
+ quit := make(chan bool)
+ <-quit
+}
diff --git a/payouts/payer.go b/payouts/payer.go
new file mode 100644
index 0000000..af66de1
--- /dev/null
+++ b/payouts/payer.go
@@ -0,0 +1,135 @@
+package payouts
+
+import (
+ "log"
+ "math/big"
+ "time"
+
+ "../rpc"
+ "../storage"
+
+ "github.com/ethereum/go-ethereum/common"
+)
+
+type PayoutsConfig struct {
+ Enabled bool `json:"enabled"`
+ Interval string `json:"interval"`
+ Daemon string `json:"daemon"`
+ Timeout string `json:"timeout"`
+ Address string `json:"address"`
+ Gas string `json:"gas"`
+ GasPrice string `json:"gasPrice"`
+ // In Shannon
+ Threshold int64 `json:"threshold"`
+}
+
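+// Gas and gas price are encoded as 32-byte left-padded hex values for the RPC call.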
+func (self PayoutsConfig) GasHex() string {
+ x := common.String2Big(self.Gas)
+ return common.BigToHash(x).Hex()
+}
+
+func (self PayoutsConfig) GasPriceHex() string {
+ x := common.String2Big(self.GasPrice)
+ return common.BigToHash(x).Hex()
+}
+
+type PayoutsProcessor struct {
+ config *PayoutsConfig
+ backend *storage.RedisClient
+ rpc *rpc.RPCClient
+ halt bool
+}
+
+func NewPayoutsProcessor(cfg *PayoutsConfig, backend *storage.RedisClient) *PayoutsProcessor {
+ u := &PayoutsProcessor{config: cfg, backend: backend}
+ u.rpc = rpc.NewRPCClient("PayoutsProcessor", cfg.Daemon, cfg.Timeout)
+ return u
+}
+
+func (u *PayoutsProcessor) Start() {
+ log.Println("Starting payouts processor")
+ intv, _ := time.ParseDuration(u.config.Interval)
+ timer := time.NewTimer(intv)
+ log.Printf("Set block payout interval to %v", intv)
+
+ // Immediately process payouts after start
+ u.process()
+ timer.Reset(intv)
+
+ go func() {
+ for {
+ select {
+ case <-timer.C:
+ u.process()
+ timer.Reset(intv)
+ }
+ }
+ }()
+}
+
+func (u *PayoutsProcessor) process() {
+ if u.halt {
+ log.Println("Payments suspended due to last critical error")
+ return
+ }
+ mustPay := 0
+ minersPaid := 0
+ totalAmount := big.NewInt(0)
+ payees, err := u.backend.GetPayees()
+ if err != nil {
+ log.Printf("Error while retrieving payees from backend: %v", err)
+ return
+ }
+
+ for _, login := range payees {
+ amount, _ := u.backend.GetBalance(login)
+ if amount <= 0 {
+ continue
+ }
+
+ gweiAmount := big.NewInt(amount)
+ if !u.reachedThreshold(gweiAmount) {
+ continue
+ }
+ mustPay++
+
+ // Gwei -> Wei
+ weiAmount := gweiAmount.Mul(gweiAmount, common.Shannon)
+ value := common.BigToHash(weiAmount).Hex()
+ txHash, err := u.rpc.SendTransaction(u.config.Address, login, u.config.GasHex(), u.config.GasPriceHex(), value)
+ if err != nil {
+ log.Printf("Failed to send payment: %v", err)
+ u.halt = true
+ break
+ }
+ minersPaid++
+ totalAmount.Add(totalAmount, big.NewInt(amount))
+ log.Printf("Paid %v Shannon to %v, TxHash: %v", amount, login, txHash)
+
+ err = u.backend.UpdateBalance(login, txHash, amount)
+ if err != nil {
+ log.Printf("DANGER: Failed to update balance for %v with %v. TX: %v. Error is: %v", login, amount, txHash, err)
+ u.halt = true
+ return
+ }
+ // Wait for TX confirmation before further payouts
+ for {
+ log.Printf("Waiting for TX to get confirmed: %v", txHash)
+ time.Sleep(15 * time.Second)
+ receipt, err := u.rpc.GetTxReceipt(txHash)
+ if err != nil {
+ log.Printf("Failed to get tx receipt for %v: %v", txHash, err)
+ }
+ if receipt != nil {
+ break
+ }
+ }
+ log.Printf("Payout TX confirmed: %v", txHash)
+ }
+ log.Printf("Paid total %v Shannon to %v of %v payees", totalAmount, minersPaid, mustPay)
+}
+
+func (self PayoutsProcessor) reachedThreshold(amount *big.Int) bool {
+ x := big.NewInt(self.config.Threshold).Cmp(amount)
+ return x < 0 // Threshold is less than amount
+}
diff --git a/payouts/unlocker.go b/payouts/unlocker.go
new file mode 100644
index 0000000..b34f818
--- /dev/null
+++ b/payouts/unlocker.go
@@ -0,0 +1,460 @@
+package payouts
+
+import (
+ "fmt"
+ "log"
+ "math/big"
+ "strconv"
+ "strings"
+ "time"
+
+ "../rpc"
+ "../storage"
+ "../util"
+
+ "github.com/ethereum/go-ethereum/common"
+)
+
+type UnlockerConfig struct {
+ Enabled bool `json:"enabled"`
+ PoolFee float64 `json:"poolFee"`
+ Depth int64 `json:"depth"`
+ ImmatureDepth int64 `json:"immatureDepth"`
+ Interval string `json:"interval"`
+ Daemon string `json:"daemon"`
+ Timeout string `json:"timeout"`
+}
+
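+// Static 5 Ether block reward (Frontier/Homestead era); each included uncle
+// additionally pays the block miner 1/32 of the static reward.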
+var constRewardInEther = new(big.Int).SetInt64(5)
+var constReward = new(big.Int).Mul(constRewardInEther, common.Ether)
+var uncleReward = new(big.Int).Div(constReward, new(big.Int).SetInt64(32))
+
+type BlockUnlocker struct {
+ config *UnlockerConfig
+ backend *storage.RedisClient
+ rpc *rpc.RPCClient
+ halt bool
+}
+
+func NewBlockUnlocker(cfg *UnlockerConfig, backend *storage.RedisClient) *BlockUnlocker {
+ if cfg.Depth < 10 {
+ log.Fatalf("Block maturity depth can't be < 10, your depth is %v", cfg.Depth)
+ }
+ if cfg.ImmatureDepth < 10 {
+ log.Fatalf("Immature depth can't be < 10, your depth is %v", cfg.ImmatureDepth)
+ }
+ u := &BlockUnlocker{config: cfg, backend: backend}
+ u.rpc = rpc.NewRPCClient("BlockUnlocker", cfg.Daemon, cfg.Timeout)
+ return u
+}
+
+func (u *BlockUnlocker) Start() {
+ log.Println("Starting block unlocker")
+ intv, _ := time.ParseDuration(u.config.Interval)
+ timer := time.NewTimer(intv)
+ log.Printf("Set block unlock interval to %v", intv)
+
+ // Immediately unlock after start
+ u.unlockPendingBlocks()
+ u.unlockAndCreditMiners()
+ timer.Reset(intv)
+
+ go func() {
+ for {
+ select {
+ case <-timer.C:
+ u.unlockPendingBlocks()
+ u.unlockAndCreditMiners()
+ timer.Reset(intv)
+ }
+ }
+ }()
+}
+
+type UnlockResult struct {
+ maturedBlocks []*storage.BlockData
+ orphanedBlocks []*storage.BlockData
+ orphans int
+ uncles int
+ blocks int
+}
+
+/* FIXME: Geth does not provide consistent state when you need both new height and new job,
+ * so in redis I am logging just what I have in the pool state at the moment a block is found.
+ * Having a very likely incorrect height in the database results in a weird block unlocking scheme,
+ * where I have to check what the hell we actually found, traversing all the blocks from height-N to height+N
+ * to make sure we will find it. We can't rely on block height here, it's just a reference point.
+ * You can say I could search by block hash, but we don't know the hash of a submitted block until we actually
+ * find it by traversing all the blocks around our height.
+ * ISSUE: https://github.com/ethereum/go-ethereum/issues/2333
+ */
+func (u *BlockUnlocker) unlockCandidates(candidates []*storage.BlockData) (*UnlockResult, error) {
+ var maturedBlocks []*storage.BlockData
+ var orphanedBlocks []*storage.BlockData
+ blocksUnlocked := 0
+ unclesUnlocked := 0
+ orphans := 0
+
+ // Data row is: "height:nonce:powHash:mixDigest:timestamp:diff:totalShares"
+ for _, candidate := range candidates {
+ block, err := u.rpc.GetBlockByHeight(candidate.Height)
+ if err != nil {
+ return nil, fmt.Errorf("Error while retrieving block %v from node: %v", candidate.Height, err)
+ }
+ if block == nil {
+ return nil, fmt.Errorf("Error while retrieving block %v from node, wrong node height", candidate.Height)
+ }
+
+ if block.Nonce == candidate.Nonce {
+ blocksUnlocked++
+ err = u.handleCandidate(block, candidate)
+ if err != nil {
+ return nil, err
+ }
+ maturedBlocks = append(maturedBlocks, candidate)
+ log.Printf("Mature block %v with %v tx, hash: %v", candidate.Height, len(block.Transactions), block.Hash[0:8])
+ } else {
+ // Temporarily mark as lost
+ orphan := true
+ log.Printf("Probably uncle block %v with nonce: %v", candidate.Height, candidate.Nonce)
+
+ /* Search for block that can include this one as uncle.
+ * Also we are searching for a normal block with wrong height here by traversing 16 blocks back and forward.
+ */
+ for i := int64(-16); i < 16; i++ {
+ nephewHeight := candidate.Height + i
+ nephewBlock, err := u.rpc.GetBlockByHeight(nephewHeight)
+ if err != nil {
+ log.Printf("Error while retrieving block %v from node: %v", nephewHeight, err)
+ return nil, err
+ }
+ if nephewBlock == nil {
+ return nil, fmt.Errorf("Error while retrieving block %v from node, wrong node height", nephewHeight)
+ }
+
+ // Check incorrect block height
+ if candidate.Nonce == nephewBlock.Nonce {
+ orphan = false
+ blocksUnlocked++
+ err = u.handleCandidate(nephewBlock, candidate)
+ if err != nil {
+ return nil, err
+ }
+ rightHeight, err := strconv.ParseInt(strings.Replace(nephewBlock.Number, "0x", "", -1), 16, 64)
+ if err != nil {
+ u.halt = true
+ log.Printf("Can't parse block number: %v", err)
+ return nil, err
+ }
+ log.Printf("Block %v has incorrect height, correct height is %v", candidate.Height, rightHeight)
+ maturedBlocks = append(maturedBlocks, candidate)
+ log.Printf("Mature block %v with %v tx, hash: %v", candidate.Height, len(block.Transactions), block.Hash[0:8])
+ break
+ }
+
+ if len(nephewBlock.Uncles) == 0 {
+ continue
+ }
+
+ // Trying to find uncle in current block during our forward check
+ for uncleIndex, uncleHash := range nephewBlock.Uncles {
+ reply, err := u.rpc.GetUncleByBlockNumberAndIndex(nephewHeight, uncleIndex)
+ if err != nil {
+ return nil, fmt.Errorf("Error while retrieving block %v from node: %v", uncleHash, err)
+ }
+ if reply == nil {
+ return nil, fmt.Errorf("Error while retrieving block %v from node, wrong node height", nephewHeight)
+ }
+
+ // Found uncle
+ if reply.Nonce == candidate.Nonce {
+ orphan = false
+ unclesUnlocked++
+ uncleHeight, err := strconv.ParseInt(strings.Replace(reply.Number, "0x", "", -1), 16, 64)
+ if err != nil {
+ u.halt = true
+ log.Printf("Can't parse uncle block number: %v", err)
+ return nil, err
+ }
+ reward := getUncleReward(uncleHeight, nephewHeight)
+ candidate.Uncle = true
+ candidate.Orphan = false
+ candidate.Hash = reply.Hash
+ candidate.Reward = reward
+ maturedBlocks = append(maturedBlocks, candidate)
+ log.Printf("Mature uncle block %v/%v of reward %v with hash: %v", candidate.Height, nephewHeight, util.FormatReward(reward), reply.Hash[0:8])
+ break
+ }
+ }
+
+ if !orphan {
+ break
+ }
+ }
+
+ // Block is lost, we didn't find any valid block or uncle matching our data in a blockchain
+ if orphan {
+ orphans++
+ candidate.Uncle = false
+ candidate.Orphan = true
+ orphanedBlocks = append(orphanedBlocks, candidate)
+ log.Printf("Rejected block %v", candidate)
+ }
+ }
+ }
+ return &UnlockResult{
+ maturedBlocks: maturedBlocks,
+ orphanedBlocks: orphanedBlocks,
+ orphans: orphans,
+ blocks: blocksUnlocked,
+ uncles: unclesUnlocked,
+ }, nil
+}
+
+func (u *BlockUnlocker) handleCandidate(block *rpc.GetBlockReply, candidate *storage.BlockData) error {
+ // Initial 5 Ether static reward
+ reward := big.NewInt(0)
+ reward.Add(reward, constReward)
+
+ // Add TX fees
+ extraTxReward, err := u.getExtraRewardForTx(block)
+ if err != nil {
+ return fmt.Errorf("Error while fetching TX receipt: %v", err)
+ }
+ reward.Add(reward, extraTxReward)
+
+ // Add reward for including uncles
+ rewardForUncles := big.NewInt(0).Mul(uncleReward, big.NewInt(int64(len(block.Uncles))))
+ reward.Add(reward, rewardForUncles)
+
+ candidate.Uncle = false
+ candidate.Orphan = false
+ candidate.Hash = block.Hash
+ candidate.Reward = reward
+ return nil
+}
+
+func (u *BlockUnlocker) unlockPendingBlocks() {
+ if u.halt {
+ log.Println("Unlocking suspended due to last critical error")
+ return
+ }
+
+ current, err := u.rpc.GetPendingBlock()
+ if err != nil {
+ u.halt = true
+ log.Printf("Unable to get current blockchain height from node: %v", err)
+ return
+ }
+ currentHeight, err := strconv.ParseInt(strings.Replace(current.Number, "0x", "", -1), 16, 64)
+ if err != nil {
+ u.halt = true
+ log.Printf("Can't parse pending block number: %v", err)
+ return
+ }
+
+ candidates, err := u.backend.GetCandidates(currentHeight - u.config.ImmatureDepth)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to get block candidates from backend: %v", err)
+ return
+ }
+
+ result, err := u.unlockCandidates(candidates)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to unlock blocks: %v", err)
+ return
+ }
+ log.Printf("Immature %v blocks, %v uncles, %v orphans", result.blocks, result.uncles, result.orphans)
+
+ err = u.backend.WritePendingOrphans(result.orphanedBlocks)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to insert orphaned blocks into backend: %v", err)
+ return
+ } else {
+ log.Printf("Inserted %v orphaned blocks to backend", result.orphans)
+ }
+
+ totalRevenue := new(big.Rat)
+ totalMinersProfit := new(big.Rat)
+ totalPoolProfit := new(big.Rat)
+
+ for _, block := range result.maturedBlocks {
+ revenue, minersProfit, poolProfit, roundRewards, err := u.calculateRewards(block)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to calculate rewards for round %v: %v", block.RoundKey(), err)
+ return
+ }
+ err = u.backend.WriteImmatureBlock(block, roundRewards)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to credit rewards for round %v: %v", block.RoundKey(), err)
+ return
+ }
+ totalRevenue.Add(totalRevenue, revenue)
+ totalMinersProfit.Add(totalMinersProfit, minersProfit)
+ totalPoolProfit.Add(totalPoolProfit, poolProfit)
+
+ logEntry := fmt.Sprintf(
+ "IMMATURE %v: revenue %v, miners profit %v, pool profit: %v",
+ block.RoundKey(),
+ util.FormatRatReward(revenue),
+ util.FormatRatReward(minersProfit),
+ util.FormatRatReward(poolProfit),
+ )
+ entries := []string{logEntry}
+ for login, reward := range roundRewards {
+ entries = append(entries, fmt.Sprintf("\tREWARD %v: %v : %v", block.RoundKey(), login, reward))
+ }
+ log.Println(strings.Join(entries, "\n"))
+ }
+
+ log.Printf(
+ "IMMATURE SESSION: revenue %v, miners profit %v, pool profit: %v",
+ util.FormatRatReward(totalRevenue),
+ util.FormatRatReward(totalMinersProfit),
+ util.FormatRatReward(totalPoolProfit),
+ )
+}
+
+func (u *BlockUnlocker) unlockAndCreditMiners() {
+ if u.halt {
+ log.Println("Unlocking suspended due to last critical error")
+ return
+ }
+
+ current, err := u.rpc.GetPendingBlock()
+ if err != nil {
+ u.halt = true
+ log.Printf("Unable to get current blockchain height from node: %v", err)
+ return
+ }
+ currentHeight, err := strconv.ParseInt(strings.Replace(current.Number, "0x", "", -1), 16, 64)
+ if err != nil {
+ u.halt = true
+ log.Printf("Can't parse pending block number: %v", err)
+ return
+ }
+
+ immature, err := u.backend.GetImmatureBlocks(currentHeight - u.config.Depth)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to get block candidates from backend: %v", err)
+ return
+ }
+
+ result, err := u.unlockCandidates(immature)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to unlock blocks: %v", err)
+ return
+ }
+ log.Printf("Unlocked %v blocks, %v uncles, %v orphans", result.blocks, result.uncles, result.orphans)
+
+ for _, block := range result.orphanedBlocks {
+ err = u.backend.WriteOrphan(block)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to insert orphaned block into backend: %v", err)
+ return
+ }
+ }
+ log.Printf("Inserted %v orphaned blocks to backend", result.orphans)
+
+ totalRevenue := new(big.Rat)
+ totalMinersProfit := new(big.Rat)
+ totalPoolProfit := new(big.Rat)
+
+ for _, block := range result.maturedBlocks {
+ revenue, minersProfit, poolProfit, roundRewards, err := u.calculateRewards(block)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to calculate rewards for round %v: %v", block.RoundKey(), err)
+ return
+ }
+ err = u.backend.WriteMaturedBlock(block, roundRewards)
+ if err != nil {
+ u.halt = true
+ log.Printf("Failed to credit rewards for round %v: %v", block.RoundKey(), err)
+ return
+ }
+ totalRevenue.Add(totalRevenue, revenue)
+ totalMinersProfit.Add(totalMinersProfit, minersProfit)
+ totalPoolProfit.Add(totalPoolProfit, poolProfit)
+
+ logEntry := fmt.Sprintf(
+ "MATURED %v: revenue %v, miners profit %v, pool profit: %v",
+ block.RoundKey(),
+ util.FormatRatReward(revenue),
+ util.FormatRatReward(minersProfit),
+ util.FormatRatReward(poolProfit),
+ )
+ entries := []string{logEntry}
+ for login, reward := range roundRewards {
+ entries = append(entries, fmt.Sprintf("\tREWARD %v: %v : %v", block.RoundKey(), login, reward))
+ }
+ log.Println(strings.Join(entries, "\n"))
+ }
+
+ log.Printf(
+ "MATURE SESSION: revenue %v, miners profit %v, pool profit: %v",
+ util.FormatRatReward(totalRevenue),
+ util.FormatRatReward(totalMinersProfit),
+ util.FormatRatReward(totalPoolProfit),
+ )
+}
+
+func (u *BlockUnlocker) calculateRewards(block *storage.BlockData) (*big.Rat, *big.Rat, *big.Rat, map[string]int64, error) {
+ rewards := make(map[string]int64)
+ revenue := new(big.Rat).SetInt(block.Reward)
+
+ feePercent := new(big.Rat).SetFloat64(u.config.PoolFee / 100)
+ poolProfit := new(big.Rat).Mul(revenue, feePercent)
+
+ minersProfit := new(big.Rat).Sub(revenue, poolProfit)
+
+ shares, err := u.backend.GetRoundShares(uint64(block.Height), block.Nonce)
+ if err != nil {
+ return nil, nil, nil, nil, err
+ }
+
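+ // Distribute the miners' profit proportionally to shares submitted during
+ // the round; per-login amounts are credited in Shannon (Gwei).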
+ for login, n := range shares {
+ percent := big.NewRat(n, block.TotalShares)
+ workerReward := new(big.Rat).Mul(minersProfit, percent)
+
+ shannon := new(big.Rat).SetInt(common.Shannon)
+ workerReward = workerReward.Quo(workerReward, shannon)
+ amount, _ := strconv.ParseInt(workerReward.FloatString(0), 10, 64)
+ rewards[login] += amount
+ }
+
+ return revenue, minersProfit, poolProfit, rewards, nil
+}
+
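+// Pre-Byzantium uncle reward: (uncleHeight + 8 - nephewHeight) / 8 of the static block reward.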
+func getUncleReward(uHeight, height int64) *big.Int {
+ reward := new(big.Int).Set(constReward)
+ reward.Mul(big.NewInt(uHeight+8-height), reward)
+ reward.Div(reward, big.NewInt(8))
+ return reward
+}
+
+func (u *BlockUnlocker) getExtraRewardForTx(block *rpc.GetBlockReply) (*big.Int, error) {
+ amount := new(big.Int)
+
+ for _, tx := range block.Transactions {
+ receipt, err := u.rpc.GetTxReceipt(tx.Hash)
+ if err != nil {
+ return nil, err
+ }
+ if receipt != nil {
+ gasUsed := common.String2Big(receipt.GasUsed)
+ gasPrice := common.String2Big(tx.GasPrice)
+ fee := new(big.Int).Mul(gasUsed, gasPrice)
+ amount.Add(amount, fee)
+ }
+ }
+ return amount, nil
+}
diff --git a/policy/policy.go b/policy/policy.go
new file mode 100644
index 0000000..64727bf
--- /dev/null
+++ b/policy/policy.go
@@ -0,0 +1,308 @@
+package policy
+
+import (
+ "fmt"
+ "log"
+ "os/exec"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "../storage"
+ "../util"
+)
+
+type Config struct {
+ Workers int `json:"workers"`
+ Banning Banning `json:"banning"`
+ Limits Limits `json:"limits"`
+ ResetInterval string `json:"resetInterval"`
+ RefreshInterval string `json:"refreshInterval"`
+}
+
+type Limits struct {
+ Enabled bool `json:"enabled"`
+ Limit int32 `json:"limit"`
+ Grace string `json:"grace"`
+ LimitJump int32 `json:"limitJump"`
+}
+
+type Banning struct {
+ Enabled bool `json:"enabled"`
+ IPSet string `json:"ipset"`
+ Timeout int64 `json:"timeout"`
+ InvalidPercent float32 `json:"invalidPercent"`
+ CheckThreshold int32 `json:"checkThreshold"`
+ MalformedLimit int32 `json:"malformedLimit"`
+}
+
+type Stats struct {
+ sync.Mutex
+ // We are using atomic with LastBeat,
+ // so moving it before the rest in order to avoid alignment issue
+ LastBeat int64
+ BannedAt int64
+ ValidShares int32
+ InvalidShares int32
+ Malformed int32
+ ConnLimit int32
+ Banned int32
+}
+
+type PolicyServer struct {
+ sync.RWMutex
+ statsMu sync.RWMutex
+ config *Config
+ stats map[string]*Stats
+ banChannel chan string
+ startedAt int64
+ grace int64
+ timeout int64
+ blacklist []string
+ whitelist []string
+ storage *storage.RedisClient
+}
+
+func Start(cfg *Config, storage *storage.RedisClient) *PolicyServer {
+ s := &PolicyServer{config: cfg, startedAt: util.MakeTimestamp()}
+ grace, _ := time.ParseDuration(cfg.Limits.Grace)
+ s.grace = int64(grace / time.Millisecond)
+ s.banChannel = make(chan string, 64)
+ s.stats = make(map[string]*Stats)
+ s.storage = storage
+ s.refreshState()
+
+ timeout, _ := time.ParseDuration(s.config.ResetInterval)
+ s.timeout = int64(timeout / time.Millisecond)
+
+ resetIntv, _ := time.ParseDuration(s.config.ResetInterval)
+ resetTimer := time.NewTimer(resetIntv)
+ log.Printf("Set policy stats reset every %v", resetIntv)
+
+ refreshIntv, _ := time.ParseDuration(s.config.RefreshInterval)
+ refreshTimer := time.NewTimer(refreshIntv)
+ log.Printf("Set policy state refresh every %v", refreshIntv)
+
+ go func() {
+ for {
+ select {
+ case <-resetTimer.C:
+ s.resetStats()
+ resetTimer.Reset(resetIntv)
+ case <-refreshTimer.C:
+ s.refreshState()
+ refreshTimer.Reset(refreshIntv)
+ }
+ }
+ }()
+
+ for i := 0; i < s.config.Workers; i++ {
+ s.startPolicyWorker()
+ }
+ log.Printf("Running with %v policy workers", s.config.Workers)
+ return s
+}
+
+func (s *PolicyServer) startPolicyWorker() {
+ go func() {
+ for {
+ select {
+ case ip := <-s.banChannel:
+ s.doBan(ip)
+ }
+ }
+ }()
+}
+
+func (s *PolicyServer) resetStats() {
+ now := util.MakeTimestamp()
+ banningTimeout := s.config.Banning.Timeout * 1000
+ total := 0
+ s.statsMu.Lock()
+ defer s.statsMu.Unlock()
+
+ for key, m := range s.stats {
+ lastBeat := atomic.LoadInt64(&m.LastBeat)
+ bannedAt := atomic.LoadInt64(&m.BannedAt)
+
+ if now-bannedAt >= banningTimeout {
+ atomic.StoreInt64(&m.BannedAt, 0)
+ if atomic.CompareAndSwapInt32(&m.Banned, 1, 0) {
+ log.Printf("Ban dropped for %v", key)
+ }
+ }
+ if now-lastBeat >= s.timeout {
+ delete(s.stats, key)
+ total++
+ }
+ }
+ log.Printf("Flushed stats for %v IP addresses", total)
+}
+
+func (s *PolicyServer) refreshState() {
+ s.Lock()
+ defer s.Unlock()
+ var err error
+
+ s.blacklist, err = s.storage.GetBlacklist()
+ if err != nil {
+ log.Printf("Failed to get blacklist from backend: %v", err)
+ }
+ s.whitelist, err = s.storage.GetWhitelist()
+ if err != nil {
+ log.Printf("Failed to get whitelist from backend: %v", err)
+ }
+ log.Println("Policy state refresh complete")
+}
+
+func (s *PolicyServer) NewStats() *Stats {
+ x := &Stats{
+ ConnLimit: s.config.Limits.Limit,
+ }
+ x.heartbeat()
+ return x
+}
+
+func (s *PolicyServer) Get(ip string) *Stats {
+ s.statsMu.RLock()
+ if x, ok := s.stats[ip]; ok {
+ s.statsMu.RUnlock()
+ x.heartbeat()
+ return x
+ }
+ s.statsMu.RUnlock()
+
+ // Map writes need the exclusive lock; re-check after acquiring it
+ s.statsMu.Lock()
+ defer s.statsMu.Unlock()
+ if x, ok := s.stats[ip]; ok {
+ x.heartbeat()
+ return x
+ }
+ x := s.NewStats()
+ s.stats[ip] = x
+ return x
+}
+
+func (s *PolicyServer) ApplyLimitPolicy(ip string) bool {
+ if !s.config.Limits.Enabled {
+ return true
+ }
+ now := util.MakeTimestamp()
+ if now-s.startedAt > s.grace {
+ return s.Get(ip).decrLimit() > 0
+ }
+ return true
+}
+
+func (s *PolicyServer) ApplyLoginPolicy(addy, ip string) bool {
+ if s.InBlackList(addy) {
+ x := s.Get(ip)
+ s.forceBan(x, ip)
+ return false
+ }
+ return true
+}
+
+func (s *PolicyServer) ApplyMalformedPolicy(ip string) {
+ x := s.Get(ip)
+ n := x.incrMalformed()
+ if n >= s.config.Banning.MalformedLimit {
+ s.forceBan(x, ip)
+ }
+}
+
+func (s *PolicyServer) ApplySharePolicy(ip string, validShare bool) bool {
+ x := s.Get(ip)
+ x.Lock()
+
+ if validShare {
+ x.ValidShares++
+ if s.config.Limits.Enabled {
+ x.incrLimit(s.config.Limits.LimitJump)
+ }
+ } else {
+ x.InvalidShares++
+ }
+
+ totalShares := x.ValidShares + x.InvalidShares
+ if totalShares < s.config.Banning.CheckThreshold {
+ x.Unlock()
+ return true
+ }
+ validShares := float32(x.ValidShares)
+ invalidShares := float32(x.InvalidShares)
+ x.resetShares()
+ x.Unlock()
+
+ if invalidShares == 0 {
+ return true
+ }
+
+ // Can be +Inf or value, previous check prevents NaN
+ ratio := invalidShares / validShares
+
+ if ratio >= s.config.Banning.InvalidPercent/100.0 {
+ s.forceBan(x, ip)
+ return false
+ }
+ return true
+}
+
+func (x *Stats) resetShares() {
+ x.ValidShares = 0
+ x.InvalidShares = 0
+}
+
+func (s *PolicyServer) forceBan(x *Stats, ip string) {
+ if !s.config.Banning.Enabled || s.InWhiteList(ip) {
+ return
+ }
+ atomic.StoreInt64(&x.BannedAt, util.MakeTimestamp())
+
+ if atomic.CompareAndSwapInt32(&x.Banned, 0, 1) {
+ if len(s.config.Banning.IPSet) > 0 {
+ s.banChannel <- ip
+ }
+ }
+}
+
+func (x *Stats) incrLimit(n int32) {
+ atomic.AddInt32(&x.ConnLimit, n)
+}
+
+func (x *Stats) incrMalformed() int32 {
+ return atomic.AddInt32(&x.Malformed, 1)
+}
+
+func (x *Stats) decrLimit() int32 {
+ return atomic.AddInt32(&x.ConnLimit, -1)
+}
+
+func (s *PolicyServer) InBlackList(addy string) bool {
+ s.RLock()
+ defer s.RUnlock()
+ return util.StringInSlice(addy, s.blacklist)
+}
+
+func (s *PolicyServer) InWhiteList(ip string) bool {
+ s.RLock()
+ defer s.RUnlock()
+ return util.StringInSlice(ip, s.whitelist)
+}
+
+func (s *PolicyServer) doBan(ip string) {
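+ // The -! flag tells ipset to ignore an already existing entry; running via
+ // sudo assumes a passwordless sudo rule for ipset.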
+ set, timeout := s.config.Banning.IPSet, s.config.Banning.Timeout
+ cmd := fmt.Sprintf("sudo ipset add %s %s timeout %v -!", set, ip, timeout)
+ args := strings.Fields(cmd)
+ head := args[0]
+ args = args[1:]
+
+ log.Printf("Banned %v with timeout %v on ipset %s", ip, timeout, set)
+
+ _, err := exec.Command(head, args...).Output()
+ if err != nil {
+ log.Printf("CMD Error: %s", err)
+ }
+}
+
+func (x *Stats) heartbeat() {
+ now := util.MakeTimestamp()
+ atomic.StoreInt64(&x.LastBeat, now)
+}
diff --git a/proxy/api.go b/proxy/api.go
new file mode 100644
index 0000000..73493e8
--- /dev/null
+++ b/proxy/api.go
@@ -0,0 +1,35 @@
+package proxy
+
+import (
+ "encoding/json"
+ "log"
+ "net/http"
+ "sync/atomic"
+)
+
+func (s *ProxyServer) StatusIndex(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "application/json; charset=UTF-8")
+ w.Header().Set("Access-Control-Allow-Origin", "*")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.WriteHeader(http.StatusOK)
+
+ reply := make(map[string]interface{})
+
+ var upstreams []interface{}
+ current := atomic.LoadInt32(&s.upstream)
+
+ for i, u := range s.upstreams {
+ upstream := map[string]interface{}{
+ "name": u.Name,
+ "sick": u.Sick(),
+ "current": current == int32(i),
+ }
+ upstreams = append(upstreams, upstream)
+ }
+ reply["upstreams"] = upstreams
+
+ err := json.NewEncoder(w).Encode(reply)
+ if err != nil {
+ log.Println("Error serializing API response: ", err)
+ }
+}
diff --git a/proxy/blocks.go b/proxy/blocks.go
new file mode 100644
index 0000000..86985a7
--- /dev/null
+++ b/proxy/blocks.go
@@ -0,0 +1,116 @@
+package proxy
+
+import (
+ "log"
+ "math/big"
+ "strconv"
+ "strings"
+ "sync"
+
+ "../rpc"
+ "../util"
+
+ "github.com/ethereum/go-ethereum/common"
+)
+
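+// maxBacklog is how many recent heights of job headers are kept so that
+// slightly stale shares can still be validated against a recent template.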
+const maxBacklog = 3
+
+type BlockTemplate struct {
+ sync.RWMutex
+ Header string
+ Seed string
+ Target string
+ Difficulty *big.Int
+ Height uint64
+ GetPendingBlockCache *rpc.GetBlockReplyPart
+ nonces map[string]bool
+ headers map[string]uint64
+}
+
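+// submit registers a nonce for this template and reports whether it was
+// already seen, providing in-RAM duplicate share detection.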
+func (t *BlockTemplate) submit(nonce string) bool {
+ t.Lock()
+ defer t.Unlock()
+ _, exist := t.nonces[nonce]
+ if exist {
+ return true
+ }
+ t.nonces[nonce] = true
+ return false
+}
+
+type Block struct {
+ difficulty *big.Int
+ hashNoNonce common.Hash
+ nonce uint64
+ mixDigest common.Hash
+ number uint64
+}
+
+func (b Block) Difficulty() *big.Int { return b.difficulty }
+func (b Block) HashNoNonce() common.Hash { return b.hashNoNonce }
+func (b Block) Nonce() uint64 { return b.nonce }
+func (b Block) MixDigest() common.Hash { return b.mixDigest }
+func (b Block) NumberU64() uint64 { return b.number }
+
+func (s *ProxyServer) fetchBlockTemplate() {
+ rpc := s.rpc()
+ t := s.currentBlockTemplate()
+ pendingReply, height, diff, err := s.fetchPendingBlock()
+ if err != nil {
+ log.Printf("Error while refreshing pending block on %s: %s", rpc.Name, err)
+ return
+ }
+ reply, err := rpc.GetWork()
+ if err != nil {
+ log.Printf("Error while refreshing block template on %s: %s", rpc.Name, err)
+ return
+ }
+ // No need to update, we have fresh job
+ if t != nil && t.Header == reply[0] {
+ return
+ }
+
+ pendingReply.Difficulty = util.ToHex(s.config.Proxy.Difficulty)
+
+ newTemplate := BlockTemplate{
+ Header: reply[0],
+ Seed: reply[1],
+ Target: reply[2],
+ Height: height,
+ Difficulty: big.NewInt(diff),
+ GetPendingBlockCache: pendingReply,
+ nonces: make(map[string]bool),
+ headers: make(map[string]uint64),
+ }
+ // Copy headers backlog and add current one
+ newTemplate.headers[reply[0]] = height
+ if t != nil {
+ for k, v := range t.headers {
+ if v >= height-maxBacklog {
+ newTemplate.headers[k] = v
+ }
+ }
+ }
+ s.blockTemplate.Store(&newTemplate)
+ log.Printf("New block to mine on %s at height: %d / %s", rpc.Name, height, reply[0][0:10])
+}
+
+func (s *ProxyServer) fetchPendingBlock() (*rpc.GetBlockReplyPart, uint64, int64, error) {
+ rpc := s.rpc()
+ reply, err := rpc.GetPendingBlock()
+ if err != nil {
+ log.Printf("Error while refreshing pending block on %s: %s", rpc.Name, err)
+ return nil, 0, 0, err
+ }
+ blockNumber, err := strconv.ParseUint(strings.Replace(reply.Number, "0x", "", -1), 16, 64)
+ if err != nil {
+ log.Println("Can't parse pending block number")
+ return nil, 0, 0, err
+ }
+ blockDiff, err := strconv.ParseInt(strings.Replace(reply.Difficulty, "0x", "", -1), 16, 64)
+ if err != nil {
+ log.Println("Can't parse pending block difficulty")
+ return nil, 0, 0, err
+ }
+ return reply, blockNumber, blockDiff, nil
+}
diff --git a/proxy/config.go b/proxy/config.go
new file mode 100644
index 0000000..3721423
--- /dev/null
+++ b/proxy/config.go
@@ -0,0 +1,52 @@
+package proxy
+
+import (
+ "../api"
+ "../payouts"
+ "../policy"
+ "../storage"
+)
+
+type Config struct {
+ Name string `json:"name"`
+ Proxy Proxy `json:"proxy"`
+ Api api.ApiConfig `json:"api"`
+ Upstream []Upstream `json:"upstream"`
+ UpstreamCheckInterval string `json:"upstreamCheckInterval"`
+
+ Threads int `json:"threads"`
+
+ Coin string `json:"coin"`
+ Redis storage.Config `json:"redis"`
+
+ BlockUnlocker payouts.UnlockerConfig `json:"unlocker"`
+ Payouts payouts.PayoutsConfig `json:"payouts"`
+
+ NewrelicName string `json:"newrelicName"`
+ NewrelicKey string `json:"newrelicKey"`
+ NewrelicVerbose bool `json:"newrelicVerbose"`
+ NewrelicEnabled bool `json:"newrelicEnabled"`
+}
+
+type Proxy struct {
+ Enabled bool `json:"enabled"`
+ Listen string `json:"listen"`
+ LimitHeadersSize int `json:"limitHeadersSize"`
+ LimitBodySize int64 `json:"limitBodySize"`
+ BehindReverseProxy bool `json:"behindReverseProxy"`
+ BlockRefreshInterval string `json:"blockRefreshInterval"`
+ Difficulty int64 `json:"difficulty"`
+ StateUpdateInterval string `json:"stateUpdateInterval"`
+ HashrateExpiration string `json:"hashrateExpiration"`
+
+ Policy policy.Config `json:"policy"`
+
+ MaxFails int64 `json:"maxFails"`
+ HealthCheck bool `json:"healthCheck"`
+}
+
+type Upstream struct {
+ Name string `json:"name"`
+ Url string `json:"url"`
+ Timeout string `json:"timeout"`
+}
diff --git a/proxy/handlers.go b/proxy/handlers.go
new file mode 100644
index 0000000..f2bc68d
--- /dev/null
+++ b/proxy/handlers.go
@@ -0,0 +1,68 @@
+package proxy
+
+import (
+ "log"
+ "regexp"
+
+ "../rpc"
+)
+
+var noncePattern = regexp.MustCompile("^0x[0-9a-f]{16}$")
+
+func (s *ProxyServer) handleGetWorkRPC(cs *Session, login, id string) ([]string, *ErrorReply) {
+ t := s.currentBlockTemplate()
+ if t == nil || len(t.Header) == 0 || s.isSick() {
+ return nil, &ErrorReply{Code: -1, Message: "Work not ready"}
+ }
+ return []string{t.Header, t.Seed, s.diff}, nil
+}
+
+func (s *ProxyServer) handleSubmitRPC(cs *Session, login string, id string, params []string) (bool, *ErrorReply) {
+ m := NewMiner(login, id, cs.ip)
+
+ if len(params) != 3 {
+ s.policy.ApplyMalformedPolicy(cs.ip)
+ log.Printf("Malformed params from %s@%s", m.Login, m.IP)
+ return false, &ErrorReply{Code: -1, Message: "Malformed params", close: true}
+ }
+
+ if !noncePattern.MatchString(params[0]) {
+ s.policy.ApplyMalformedPolicy(cs.ip)
+ log.Printf("Malformed nonce from %s@%s", m.Login, m.IP)
+ return false, &ErrorReply{Code: -1, Message: "Malformed nonce", close: true}
+ }
+ t := s.currentBlockTemplate()
+ exist, validShare := m.processShare(s, t, params)
+ s.policy.ApplySharePolicy(m.IP, !exist && validShare)
+
+ if exist {
+ log.Printf("Duplicate share %s from %s@%s params: %v", params[0], m.Login, m.IP, params)
+ return false, &ErrorReply{Code: -1, Message: "Duplicate share", close: true}
+ }
+
+ if !validShare {
+ log.Printf("Invalid share from %s@%s with %v nonce", m.Login, m.IP, params[0])
+ return false, nil
+ }
+
+ log.Printf("Valid share from %s@%s", m.Login, m.IP)
+ return true, nil
+}
+
+func (s *ProxyServer) handleGetBlockByNumberRPC() *rpc.GetBlockReplyPart {
+ t := s.currentBlockTemplate()
+ var reply *rpc.GetBlockReplyPart
+ if t != nil {
+ reply = t.GetPendingBlockCache
+ }
+ return reply
+}
+
+func (s *ProxyServer) handleUnknownRPC(cs *Session, req *JSONRpcReq) *ErrorReply {
+ log.Printf("Unknown RPC method: %v", req)
+ return &ErrorReply{Code: -1, Message: "Invalid method"}
+}
diff --git a/proxy/miner.go b/proxy/miner.go
new file mode 100644
index 0000000..aa70b24
--- /dev/null
+++ b/proxy/miner.go
@@ -0,0 +1,95 @@
+package proxy
+
+import (
+ "log"
+ "math/big"
+ "strconv"
+ "strings"
+
+ "github.com/ethereum/ethash"
+ "github.com/ethereum/go-ethereum/common"
+)
+
+var hasher = ethash.New()
+
+type Miner struct {
+ Id string
+ Login string
+ IP string
+}
+
+func NewMiner(login, id, ip string) Miner {
+ if len(id) == 0 {
+ id = "0"
+ }
+ return Miner{Login: login, Id: id, IP: ip}
+}
+
+func (m Miner) key() string {
+ return strings.Join([]string{m.Login, m.Id}, ":")
+}
+
+func (m Miner) processShare(s *ProxyServer, t *BlockTemplate, params []string) (bool, bool) {
+ paramsOrig := params[:]
+
+ nonceHex := params[0]
+ hashNoNonce := params[1]
+ nonce, _ := strconv.ParseUint(strings.Replace(nonceHex, "0x", "", -1), 16, 64)
+ mixDigest := strings.ToLower(params[2])
+ shareDiff := s.config.Proxy.Difficulty
+
+ if _, ok := t.headers[hashNoNonce]; !ok {
+ log.Printf("Stale share from %v@%v", m.Login, m.IP)
+ return false, false
+ }
+
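+ // Build two candidates from the same solution: one verified at the pool
+ // share difficulty and one at the real network difficulty, so a share that
+ // also meets network difficulty gets submitted as a block.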
+ share := Block{
+ number: t.Height,
+ hashNoNonce: common.HexToHash(hashNoNonce),
+ difficulty: big.NewInt(shareDiff),
+ nonce: nonce,
+ mixDigest: common.HexToHash(mixDigest),
+ }
+
+ block := Block{
+ number: t.Height,
+ hashNoNonce: common.HexToHash(hashNoNonce),
+ difficulty: t.Difficulty,
+ nonce: nonce,
+ mixDigest: common.HexToHash(mixDigest),
+ }
+
+ if !hasher.Verify(share) {
+ return false, false
+ }
+
+ // In-Ram check for duplicate share
+ if t.submit(params[0]) {
+ return true, false
+ }
+
+ if hasher.Verify(block) {
+ _, err := s.rpc().SubmitBlock(paramsOrig)
+ if err != nil {
+ log.Printf("Block submission failure on height: %v for %v: %v", t.Height, t.Header, err)
+ } else {
+ s.fetchBlockTemplate()
+ err = s.backend.WriteBlock(m.Login, m.Id, shareDiff, t.Difficulty.Int64(), t.Height, nonceHex, hashNoNonce, mixDigest, s.hashrateExpiration)
+ if err != nil {
+ log.Printf("Failed to insert block candidate into backend: %v", err)
+ } else {
+ log.Printf("Inserted block %v to backend", t.Height)
+ }
+ log.Printf("Block with nonce: %v found by miner %v@%v at height: %d", nonceHex, m.Login, m.IP, t.Height)
+ }
+ } else {
+ exist, err := s.backend.WriteShare(m.Login, m.Id, nonceHex, mixDigest, t.Height, shareDiff, s.hashrateExpiration)
+ if exist {
+ return true, false
+ }
+ if err != nil {
+ log.Printf("Failed to insert share data into backend: %v", err)
+ }
+ }
+ return false, true
+}
diff --git a/proxy/proto.go b/proxy/proto.go
new file mode 100644
index 0000000..b72bc7d
--- /dev/null
+++ b/proxy/proto.go
@@ -0,0 +1,26 @@
+package proxy
+
+import "encoding/json"
+
+type JSONRpcReq struct {
+ Id *json.RawMessage `json:"id"`
+ Method string `json:"method"`
+ Params *json.RawMessage `json:"params"`
+}
+
+type JSONRpcResp struct {
+ Id *json.RawMessage `json:"id"`
+ Version string `json:"jsonrpc"`
+ Result interface{} `json:"result"`
+ Error interface{} `json:"error,omitempty"`
+}
+
+type SubmitReply struct {
+ Status string `json:"status"`
+}
+
+type ErrorReply struct {
+ Code int `json:"code"`
+ Message string `json:"message"`
+ close bool
+}
diff --git a/proxy/proxy.go b/proxy/proxy.go
new file mode 100644
index 0000000..de77381
--- /dev/null
+++ b/proxy/proxy.go
@@ -0,0 +1,291 @@
+package proxy
+
+import (
+ "encoding/json"
+ "github.com/gorilla/mux"
+ "io"
+ "log"
+ "net"
+ "net/http"
+ "sync/atomic"
+ "time"
+
+ "../policy"
+ "../rpc"
+ "../storage"
+ "../util"
+)
+
+type ProxyServer struct {
+ config *Config
+ blockTemplate atomic.Value
+ upstream int32
+ upstreams []*rpc.RPCClient
+ backend *storage.RedisClient
+ diff string
+ policy *policy.PolicyServer
+ hashrateExpiration time.Duration
+ failsCount int64
+}
+
+type Session struct {
+ ip string
+ enc *json.Encoder
+}
+
+func NewProxy(cfg *Config, backend *storage.RedisClient) *ProxyServer {
+ if len(cfg.Name) == 0 {
+ log.Fatal("You must set instance name")
+ }
+ policy := policy.Start(&cfg.Proxy.Policy, backend)
+
+ proxy := &ProxyServer{config: cfg, backend: backend, policy: policy}
+ proxy.diff = util.GetTargetHex(cfg.Proxy.Difficulty)
+
+ proxy.upstreams = make([]*rpc.RPCClient, len(cfg.Upstream))
+ for i, v := range cfg.Upstream {
+ proxy.upstreams[i] = rpc.NewRPCClient(v.Name, v.Url, v.Timeout)
+ log.Printf("Upstream: %s => %s", v.Name, v.Url)
+ }
+ log.Printf("Default upstream: %s => %s", proxy.rpc().Name, proxy.rpc().Url)
+
+ proxy.fetchBlockTemplate()
+
+ proxy.hashrateExpiration, _ = time.ParseDuration(cfg.Proxy.HashrateExpiration)
+
+ refreshIntv, _ := time.ParseDuration(cfg.Proxy.BlockRefreshInterval)
+ refreshTimer := time.NewTimer(refreshIntv)
+ log.Printf("Set block refresh every %v", refreshIntv)
+
+ checkIntv, _ := time.ParseDuration(cfg.UpstreamCheckInterval)
+ checkTimer := time.NewTimer(checkIntv)
+
+ stateUpdateIntv, _ := time.ParseDuration(cfg.Proxy.StateUpdateInterval)
+ stateUpdateTimer := time.NewTimer(stateUpdateIntv)
+
+ go func() {
+ for {
+ select {
+ case <-refreshTimer.C:
+ proxy.fetchBlockTemplate()
+ refreshTimer.Reset(refreshIntv)
+ }
+ }
+ }()
+
+ go func() {
+ for {
+ select {
+ case <-checkTimer.C:
+ proxy.checkUpstreams()
+ checkTimer.Reset(checkIntv)
+ }
+ }
+ }()
+
+ go func() {
+ for {
+ select {
+ case <-stateUpdateTimer.C:
+ t := proxy.currentBlockTemplate()
+ if t != nil {
+ err := backend.WriteNodeState(cfg.Name, t.Height, t.Difficulty)
+ if err != nil {
+ log.Printf("Failed to write node state to backend: %v", err)
+ proxy.markSick()
+ } else {
+ proxy.markOk()
+ }
+ }
+ stateUpdateTimer.Reset(stateUpdateIntv)
+ }
+ }
+ }()
+
+ return proxy
+}
+
+func (s *ProxyServer) Start() {
+ log.Printf("Starting proxy on %v", s.config.Proxy.Listen)
+ r := mux.NewRouter()
+ r.Handle("/miner/{login:0x[0-9a-f]{40}}/{id:[0-9a-zA-Z\\-\\_]{1,8}}", s)
+ r.Handle("/miner/{login:0x[0-9a-f]{40}}", s)
+ srv := &http.Server{
+ Addr: s.config.Proxy.Listen,
+ Handler: r,
+ MaxHeaderBytes: s.config.Proxy.LimitHeadersSize,
+ }
+ err := srv.ListenAndServe()
+ if err != nil {
+ log.Fatalf("Failed to start proxy: %v", err)
+ }
+}
+
+func (s *ProxyServer) rpc() *rpc.RPCClient {
+ i := atomic.LoadInt32(&s.upstream)
+ return s.upstreams[i]
+}
+
+func (s *ProxyServer) checkUpstreams() {
+ candidate := int32(0)
+ backup := false
+
+ for i, v := range s.upstreams {
+ if v.Check() && !backup {
+ candidate = int32(i)
+ backup = true
+ }
+ }
+
+ if atomic.LoadInt32(&s.upstream) != candidate {
+ log.Printf("Switching to %v upstream", s.upstreams[candidate].Name)
+ atomic.StoreInt32(&s.upstream, candidate)
+ }
+}
+
+func (s *ProxyServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ if r.Method != "POST" {
+ s.writeError(w, 405, "rpc: POST method required, received "+r.Method)
+ return
+ }
+ ip := s.remoteAddr(r)
+ s.handleClient(w, r, ip)
+}
+
+func (s *ProxyServer) remoteAddr(r *http.Request) string {
+ if s.config.Proxy.BehindReverseProxy {
+ ip := r.Header.Get("X-Forwarded-For")
+ if len(ip) > 0 && net.ParseIP(ip) != nil {
+ return ip
+ }
+ }
+ ip, _, _ := net.SplitHostPort(r.RemoteAddr)
+ return ip
+}
+
+func (s *ProxyServer) handleClient(w http.ResponseWriter, r *http.Request, ip string) {
+ if r.ContentLength > s.config.Proxy.LimitBodySize {
+ log.Printf("Socket flood from %s", ip)
+ s.policy.ApplyMalformedPolicy(ip)
+ r.Close = true
+ http.Error(w, "Request too large", http.StatusRequestEntityTooLarge)
+ return
+ }
+ r.Body = http.MaxBytesReader(w, r.Body, s.config.Proxy.LimitBodySize)
+ defer r.Body.Close()
+
+ cs := &Session{ip: ip, enc: json.NewEncoder(w)}
+ dec := json.NewDecoder(r.Body)
+ for {
+ var req JSONRpcReq
+ if err := dec.Decode(&req); err == io.EOF {
+ break
+ } else if err != nil {
+ log.Printf("Malformed request from %v: %v", ip, err)
+ s.policy.ApplyMalformedPolicy(ip)
+ r.Close = true
+ return
+ }
+ cs.handleMessage(s, r, &req)
+ }
+}
+
+func (cs *Session) handleMessage(s *ProxyServer, r *http.Request, req *JSONRpcReq) {
+ if req.Id == nil {
+ log.Printf("Missing RPC id from %s", cs.ip)
+ s.policy.ApplyMalformedPolicy(cs.ip)
+ r.Close = true
+ return
+ }
+
+ vars := mux.Vars(r)
+
+ if !s.policy.ApplyLoginPolicy(vars["login"], cs.ip) {
+ errReply := &ErrorReply{Code: -1, Message: "You are blacklisted", close: true}
+ cs.sendError(req.Id, errReply)
+ return
+ }
+
+ // Handle RPC methods
+ switch req.Method {
+ case "eth_getWork":
+ reply, errReply := s.handleGetWorkRPC(cs, vars["login"], vars["id"])
+ if errReply != nil {
+ r.Close = errReply.close
+ cs.sendError(req.Id, errReply)
+ break
+ }
+ cs.sendResult(req.Id, &reply)
+ case "eth_submitWork":
+ if req.Params != nil {
+ var params []string
+ err := json.Unmarshal(*req.Params, &params)
+ if err != nil {
+ log.Printf("Unable to parse params from %v", cs.ip)
+ s.policy.ApplyMalformedPolicy(cs.ip)
+ r.Close = true
+ break
+ }
+ reply, errReply := s.handleSubmitRPC(cs, vars["login"], vars["id"], params)
+ if errReply != nil {
+ r.Close = errReply.close
+ cs.sendError(req.Id, errReply)
+ break
+ }
+ cs.sendResult(req.Id, &reply)
+ } else {
+ r.Close = true
+ errReply := &ErrorReply{Code: -1, Message: "Malformed request"}
+ cs.sendError(req.Id, errReply)
+ }
+ case "eth_getBlockByNumber":
+ reply := s.handleGetBlockByNumberRPC()
+ cs.sendResult(req.Id, reply)
+ case "eth_submitHashrate":
+ cs.sendResult(req.Id, true)
+ default:
+ r.Close = true
+ errReply := s.handleUnknownRPC(cs, req)
+ cs.sendError(req.Id, errReply)
+ }
+}
+
+func (cs *Session) sendResult(id *json.RawMessage, result interface{}) error {
+ message := JSONRpcResp{Id: id, Version: "2.0", Error: nil, Result: result}
+ return cs.enc.Encode(&message)
+}
+
+func (cs *Session) sendError(id *json.RawMessage, reply *ErrorReply) error {
+ message := JSONRpcResp{Id: id, Version: "2.0", Error: reply}
+ return cs.enc.Encode(&message)
+}
+
+func (s *ProxyServer) writeError(w http.ResponseWriter, status int, msg string) {
+ // Headers must be set before WriteHeader, and the message actually written
+ w.Header().Set("Content-Type", "text/plain; charset=utf-8")
+ w.WriteHeader(status)
+ w.Write([]byte(msg))
+}
+
+func (s *ProxyServer) currentBlockTemplate() *BlockTemplate {
+ t := s.blockTemplate.Load()
+ if t != nil {
+ return t.(*BlockTemplate)
+ } else {
+ return nil
+ }
+}
+
+func (s *ProxyServer) markSick() {
+ atomic.AddInt64(&s.failsCount, 1)
+}
+
+func (s *ProxyServer) isSick() bool {
+ x := atomic.LoadInt64(&s.failsCount)
+ if s.config.Proxy.HealthCheck && x >= s.config.Proxy.MaxFails {
+ return true
+ }
+ return false
+}
+
+func (s *ProxyServer) markOk() {
+ atomic.StoreInt64(&s.failsCount, 0)
+}
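
With the proxy running (the sample config binds it to 0.0.0.0:8546), the mining endpoint can be probed by hand. A hedged example: the address is the demo one used elsewhere in this patch, and the rig id just has to match `[0-9a-zA-Z\-\_]{1,8}`:

    curl -s -X POST \
         -H 'Content-Type: application/json' \
         -d '{"id":1,"method":"eth_getWork","params":[]}' \
         http://127.0.0.1:8546/miner/0xb85150eb365e7df0941f0cf08235f987ba91506a/rig-1

A healthy pool answers with the three-element work array; a blacklisted login gets `{"code":-1,"message":"You are blacklisted"}` in the `error` field instead.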
diff --git a/rpc/rpc.go b/rpc/rpc.go
new file mode 100644
index 0000000..e3f7ee3
--- /dev/null
+++ b/rpc/rpc.go
@@ -0,0 +1,251 @@
+package rpc
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "io/ioutil"
+ "net/http"
+ "strconv"
+ "sync"
+ "time"
+)
+
+type RPCClient struct {
+ sync.RWMutex
+ Url string
+ Name string
+ sick bool
+ sickRate int
+ successRate int
+ client *http.Client
+}
+
+type GetBlockReply struct {
+ Number string `json:"number"`
+ Hash string `json:"hash"`
+ ParentHash string `json:"parentHash"`
+ Nonce string `json:"nonce"`
+ Sha3Uncles string `json:"sha3Uncles"`
+ LogsBloom string `json:"logsBloom"`
+ TransactionsRoot string `json:"transactionsRoot"`
+ StateRoot string `json:"stateRoot"`
+ Miner string `json:"miner"`
+ Difficulty string `json:"difficulty"`
+ TotalDifficulty string `json:"totalDifficulty"`
+ Size string `json:"size"`
+ ExtraData string `json:"extraData"`
+ GasLimit string `json:"gasLimit"`
+ GasUsed string `json:"gasUsed"`
+ Timestamp string `json:"timestamp"`
+ Transactions []Tx `json:"transactions"`
+ Uncles []string `json:"uncles"`
+}
+
+type GetBlockReplyPart struct {
+ Number string `json:"number"`
+ Difficulty string `json:"difficulty"`
+}
+
+type TxReceipt struct {
+ TxHash string `json:"transactionHash"`
+ GasUsed string `json:"gasUsed"`
+}
+
+type Tx struct {
+ Gas string `json:"gas"`
+ GasPrice string `json:"gasPrice"`
+ Hash string `json:"hash"`
+}
+
+type JSONRpcResp struct {
+ Id *json.RawMessage `json:"id"`
+ Result *json.RawMessage `json:"result"`
+ Error map[string]interface{} `json:"error"`
+}
+
+func NewRPCClient(name, url, timeout string) *RPCClient {
+ rpcClient := &RPCClient{Name: name, Url: url}
+ timeoutIntv, _ := time.ParseDuration(timeout)
+ rpcClient.client = &http.Client{
+ Timeout: timeoutIntv,
+ }
+ return rpcClient
+}
+
+func (r *RPCClient) GetWork() ([]string, error) {
+ rpcResp, err := r.doPost(r.Url, "eth_getWork", []string{})
+ var reply []string
+ if err != nil {
+ return reply, err
+ }
+ if rpcResp.Error != nil {
+ return reply, errors.New(rpcResp.Error["message"].(string))
+ }
+
+ err = json.Unmarshal(*rpcResp.Result, &reply)
+ // Handle empty result, daemon is catching up (geth bug!!!)
+ if len(reply) != 3 || len(reply[0]) == 0 {
+ return reply, errors.New("Daemon is not ready")
+ }
+ return reply, err
+}
+
+func (r *RPCClient) GetPendingBlock() (*GetBlockReplyPart, error) {
+ rpcResp, err := r.doPost(r.Url, "eth_getBlockByNumber", []interface{}{"pending", false})
+ var reply *GetBlockReplyPart
+ if err != nil {
+ return reply, err
+ }
+ if rpcResp.Error != nil {
+ return reply, errors.New(rpcResp.Error["message"].(string))
+ }
+ if rpcResp.Result != nil {
+ err = json.Unmarshal(*rpcResp.Result, &reply)
+ }
+ return reply, err
+}
+
+func (r *RPCClient) GetBlockByHeight(height int64) (*GetBlockReply, error) {
+ params := []interface{}{height, true}
+ return r.getBlockBy("eth_getBlockByNumber", params)
+}
+
+func (r *RPCClient) GetBlockByHash(hash string) (*GetBlockReply, error) {
+ params := []interface{}{hash, true}
+ return r.getBlockBy("eth_getBlockByHash", params)
+}
+
+func (r *RPCClient) getBlockByHeight(params []interface{}) (*GetBlockReply, error) {
+ return r.getBlockBy("eth_getBlockByNumber", params)
+}
+
+func (r *RPCClient) GetUncleByBlockNumberAndIndex(height int64, index int) (*GetBlockReply, error) {
+ params := []interface{}{height, index}
+ return r.getBlockBy("eth_getUncleByBlockNumberAndIndex", params)
+}
+
+func (r *RPCClient) getBlockBy(method string, params []interface{}) (*GetBlockReply, error) {
+ rpcResp, err := r.doPost(r.Url, method, params)
+ var reply *GetBlockReply
+ if err != nil {
+ return reply, err
+ }
+ if rpcResp.Error != nil {
+ return reply, errors.New(rpcResp.Error["message"].(string))
+ }
+ if rpcResp.Result != nil {
+ err = json.Unmarshal(*rpcResp.Result, &reply)
+ }
+ return reply, err
+}
+
+func (r *RPCClient) GetTxReceipt(hash string) (*TxReceipt, error) {
+ rpcResp, err := r.doPost(r.Url, "eth_getTransactionReceipt", []string{hash})
+ var reply *TxReceipt
+ if err != nil {
+ return nil, err
+ }
+ if rpcResp.Error != nil {
+ return nil, errors.New(rpcResp.Error["message"].(string))
+ }
+ if rpcResp.Result != nil {
+ err = json.Unmarshal(*rpcResp.Result, &reply)
+ }
+ return reply, err
+}
+
+func (r *RPCClient) SubmitBlock(params []string) (bool, error) {
+ rpcResp, err := r.doPost(r.Url, "eth_submitWork", params)
+ var result bool
+ if err != nil {
+ return false, err
+ }
+ if rpcResp.Error != nil {
+ return false, errors.New(rpcResp.Error["message"].(string))
+ }
+ err = json.Unmarshal(*rpcResp.Result, &result)
+ if err != nil {
+ return false, err
+ }
+ if !result {
+ return false, errors.New("Block not accepted, result=false")
+ }
+ return result, nil
+}
+
+func (r *RPCClient) SendTransaction(from, to, gas, gasPrice, value string) (string, error) {
+ params := map[string]string{
+ "from": from,
+ "to": to,
+ "gas": gas,
+ "gasPrice": gasPrice,
+ "value": value,
+ }
+ rpcResp, err := r.doPost(r.Url, "eth_sendTransaction", []interface{}{params})
+ var reply string
+ if err != nil {
+ return reply, err
+ }
+ if rpcResp.Error != nil {
+ return reply, errors.New(rpcResp.Error["message"].(string))
+ }
+ err = json.Unmarshal(*rpcResp.Result, &reply)
+ return reply, err
+}
+
+func (r *RPCClient) doPost(url string, method string, params interface{}) (JSONRpcResp, error) {
+ jsonReq := map[string]interface{}{"jsonrpc": "2.0", "method": method, "params": params, "id": 0}
+ data, _ := json.Marshal(jsonReq)
+
+ req, err := http.NewRequest("POST", url, bytes.NewBuffer(data))
+ if err != nil {
+ return JSONRpcResp{}, err
+ }
+ req.Header.Set("Content-Length", strconv.Itoa(len(data)))
+ req.Header.Set("Content-Type", "application/json")
+ req.Header.Set("Accept", "application/json")
+
+ resp, err := r.client.Do(req)
+ var rpcResp JSONRpcResp
+
+ if err != nil {
+ r.markSick()
+ return rpcResp, err
+ }
+ defer resp.Body.Close()
+
+ body, _ := ioutil.ReadAll(resp.Body)
+ err = json.Unmarshal(body, &rpcResp)
+
+ if rpcResp.Error != nil {
+ r.markSick()
+ }
+ return rpcResp, err
+}
+
+func (r *RPCClient) Check() bool {
+ _, err := r.GetWork()
+ if err != nil {
+ return false
+ }
+ r.markAlive()
+ return !r.Sick()
+}
+
+func (r *RPCClient) Sick() bool {
+ r.RLock()
+ defer r.RUnlock()
+ return r.sick
+}
+
+func (r *RPCClient) markSick() {
+ r.Lock()
+ r.sickRate++
+ r.successRate = 0
+ if r.sickRate >= 5 {
+ r.sick = true
+ }
+ r.Unlock()
+}
+
+func (r *RPCClient) markAlive() {
+ r.Lock()
+ r.successRate++
+ if r.successRate >= 5 {
+ r.sick = false
+ r.sickRate = 0
+ r.successRate = 0
+ }
+ r.Unlock()
+}
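
This client is easy to exercise on its own against a local geth. A minimal sketch, assuming geth's HTTP RPC on 127.0.0.1:8545 and an import path matching wherever this package lives in your tree (the patch itself uses relative imports):

```go
package main

import (
	"log"

	"ether-pool/rpc" // assumed import path
)

func main() {
	node := rpc.NewRPCClient("main", "http://127.0.0.1:8545", "10s")

	// GetWork doubles as the health probe used by Check()
	work, err := node.GetWork()
	if err != nil {
		log.Fatalf("daemon not ready: %v", err)
	}
	// work[0] = header pow-hash, work[1] = seed hash, work[2] = share target
	log.Printf("current work: %v", work)
	log.Printf("upstream alive: %v", node.Check())
}
```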
diff --git a/storage/redis.go b/storage/redis.go
new file mode 100644
index 0000000..7ddc06b
--- /dev/null
+++ b/storage/redis.go
@@ -0,0 +1,754 @@
+package storage
+
+import (
+ "fmt"
+ "math/big"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/ethereum/go-ethereum/common"
+ "gopkg.in/redis.v3"
+
+ "../util"
+)
+
+type Config struct {
+ Endpoint string `json:"endpoint"`
+ Password string `json:"password"`
+ Database int64 `json:"database"`
+ PoolSize int `json:"poolSize"`
+}
+
+type RedisClient struct {
+ client *redis.Client
+ prefix string
+}
+
+type BlockData struct {
+ Height int64 `json:"height"`
+ Timestamp int64 `json:"timestamp"`
+ Difficulty string `json:"difficulty"`
+ TotalShares int64 `json:"shares"`
+ Uncle bool `json:"uncle"`
+ Orphan bool `json:"orphan"`
+ Hash string `json:"hash"`
+ Nonce string `json:"-"`
+ PowHash string `json:"-"`
+ MixDigest string `json:"-"`
+ Reward *big.Int `json:"-"`
+ ImmatureReward string `json:"-"`
+ RewardString string `json:"reward"`
+ candidateKey string `json:"-"`
+ immatureKey string `json:"-"`
+}
+
+func (b *BlockData) RewardInShannon() int64 {
+ reward := new(big.Int).Div(b.Reward, common.Shannon)
+ return reward.Int64()
+}
+
+func (b *BlockData) serializeHash() string {
+ if len(b.Hash) > 0 {
+ return b.Hash
+ } else {
+ return "0x0"
+ }
+}
+
+func (b *BlockData) RoundKey() string {
+ return join(b.Height, b.Hash)
+}
+
+func (b *BlockData) key() string {
+ return join(b.Uncle, b.Orphan, b.Nonce, b.serializeHash(), b.Timestamp, b.Difficulty, b.TotalShares, b.Reward)
+}
+
+type Miner struct {
+ LastBeat int64 `json:"lastBeat"`
+ HR int64 `json:"hr"`
+ Offline bool `json:"offline"`
+ startedAt int64
+}
+
+type Worker struct {
+ Miner
+ TotalHR int64 `json:"hr2"`
+}
+
+func NewRedisClient(cfg *Config, prefix string) *RedisClient {
+ client := redis.NewClient(&redis.Options{
+ Addr: cfg.Endpoint,
+ Password: cfg.Password,
+ DB: cfg.Database,
+ PoolSize: cfg.PoolSize,
+ })
+ return &RedisClient{client: client, prefix: prefix}
+}
+
+func (r *RedisClient) Client() *redis.Client {
+ return r.client
+}
+
+func (r *RedisClient) Check() (string, error) {
+ return r.client.Ping().Result()
+}
+
+// Always returns list of addresses. If Redis fails it will return empty list.
+func (r *RedisClient) GetBlacklist() ([]string, error) {
+ cmd := r.client.SMembers(r.formatKey("blacklist"))
+ if cmd.Err() != nil {
+ return []string{}, cmd.Err()
+ }
+ return cmd.Val(), nil
+}
+
+// Always returns list of IPs. If Redis fails it will return empty list.
+func (r *RedisClient) GetWhitelist() ([]string, error) {
+ cmd := r.client.SMembers(r.formatKey("whitelist"))
+ if cmd.Err() != nil {
+ return []string{}, cmd.Err()
+ }
+ return cmd.Val(), nil
+}
+
+func (r *RedisClient) WriteNodeState(id string, height uint64, diff *big.Int) error {
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ now := util.MakeTimestamp() / 1000
+
+ _, err := tx.Exec(func() error {
+ tx.HSet(r.formatKey("nodes"), join(id, "name"), id)
+ tx.HSet(r.formatKey("nodes"), join(id, "height"), strconv.FormatUint(height, 10))
+ tx.HSet(r.formatKey("nodes"), join(id, "difficulty"), diff.String())
+ tx.HSet(r.formatKey("nodes"), join(id, "lastBeat"), strconv.FormatInt(now, 10))
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) GetNodeStates() ([]map[string]interface{}, error) {
+ cmd := r.client.HGetAllMap(r.formatKey("nodes"))
+ if cmd.Err() != nil {
+ return nil, cmd.Err()
+ }
+ m := make(map[string]map[string]interface{})
+ for key, value := range cmd.Val() {
+ parts := strings.Split(key, ":")
+ if val, ok := m[parts[0]]; ok {
+ val[parts[1]] = value
+ } else {
+ node := make(map[string]interface{})
+ node[parts[1]] = value
+ m[parts[0]] = node
+ }
+ }
+ v := make([]map[string]interface{}, len(m), len(m))
+ i := 0
+ for _, value := range m {
+ v[i] = value
+ i++
+ }
+ return v, nil
+}
+
+func (r *RedisClient) WriteShare(login, id, nonce, mixDigest string, height uint64, diff int64, window time.Duration) (bool, error) {
+ // Sweep PoW backlog for previous blocks
+ r.client.ZRemRangeByScore(r.formatKey("pow"), "-inf", fmt.Sprint("(", height-8))
+ cmd := r.client.ZAdd(r.formatKey("pow"), redis.Z{Score: float64(height), Member: join(nonce, mixDigest)})
+ if cmd.Err() != nil {
+ return false, cmd.Err()
+ }
+ // Duplicate nonce
+ if cmd.Val() == 0 {
+ return true, nil
+ }
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ ms := time.Now().UnixNano() / 1000000
+ ts := ms / 1000
+
+ _, err := tx.Exec(func() error {
+ r.writeShare(tx, ms, ts, login, id, diff, window)
+ tx.HIncrBy(r.formatKey("stats"), "roundShares", diff)
+ return nil
+ })
+ return false, err
+}
+
+func (r *RedisClient) WriteBlock(login, id string, diff, roundDiff int64, height uint64, nonce, powHash, mixDigest string, window time.Duration) error {
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ ms := util.MakeTimestamp()
+ ts := ms / 1000
+
+ cmds, err := tx.Exec(func() error {
+ r.writeShare(tx, ms, ts, login, id, diff, window)
+ tx.HSet(r.formatKey("stats"), "lastBlockFound", strconv.FormatInt(ts, 10))
+ tx.HDel(r.formatKey("stats"), "roundShares")
+ tx.ZIncrBy(r.formatKey("finders"), 1, login)
+ tx.HIncrBy(r.formatKey("miners", login), "blocksFound", 1)
+ tx.Rename(r.formatKey("shares", "roundCurrent"), r.formatKey("shares", formatRound(height), nonce))
+ tx.HGetAllMap(r.formatKey("shares", formatRound(height), nonce))
+ return nil
+ })
+ if err != nil {
+ return err
+ } else {
+ sharesMap, _ := cmds[10].(*redis.StringStringMapCmd).Result()
+ totalShares := int64(0)
+ for _, v := range sharesMap {
+ n, _ := strconv.ParseInt(v, 10, 64)
+ totalShares += n
+ }
+ hashHex := join(nonce, powHash, mixDigest)
+ s := join(hashHex, ts, roundDiff, totalShares)
+ cmd := r.client.ZAdd(r.formatKey("blocks", "candidates"), redis.Z{Score: float64(height), Member: s})
+ return cmd.Err()
+ }
+}
+
+func (r *RedisClient) writeShare(tx *redis.Multi, ms, ts int64, login, id string, diff int64, expire time.Duration) {
+ tx.HIncrBy(r.formatKey("shares", "roundCurrent"), login, diff)
+ tx.ZAdd(r.formatKey("hashrate"), redis.Z{Score: float64(ts), Member: join(diff, login, id, ms)})
+ tx.ZAdd(r.formatKey("hashrate", login), redis.Z{Score: float64(ts), Member: join(diff, id, ms)})
+ tx.Expire(r.formatKey("hashrate", login), expire) // Will delete hashrates for miners that gone
+ tx.HSet(r.formatKey("miners", login), "lastShare", strconv.FormatInt(ts, 10))
+}
+
+func (r *RedisClient) formatKey(args ...interface{}) string {
+ return join(r.prefix, join(args...))
+}
+
+func formatRound(height uint64) string {
+ return "round" + strconv.FormatUint(height, 10)
+}
+
+func join(args ...interface{}) string {
+ s := make([]string, len(args))
+ for i, v := range args {
+ switch v.(type) {
+ case string:
+ s[i] = v.(string)
+ case int64:
+ s[i] = strconv.FormatInt(v.(int64), 10)
+ case uint64:
+ s[i] = strconv.FormatUint(v.(uint64), 10)
+ case float64:
+ s[i] = strconv.FormatFloat(v.(float64), 'f', 0, 64)
+ case bool:
+ if v.(bool) {
+ s[i] = "1"
+ } else {
+ s[i] = "0"
+ }
+ case *big.Int:
+ n := v.(*big.Int)
+ if n != nil {
+ s[i] = n.String()
+ } else {
+ s[i] = "0"
+ }
+ default:
+ panic("Invalid type specified for conversion")
+ }
+ }
+ return strings.Join(s, ":")
+}
+
+func (r *RedisClient) GetCandidates(maxHeight int64) ([]*BlockData, error) {
+ option := redis.ZRangeByScore{Min: "0", Max: strconv.FormatInt(maxHeight, 10)}
+ cmd := r.client.ZRangeByScoreWithScores(r.formatKey("blocks", "candidates"), option)
+ if cmd.Err() != nil {
+ return nil, cmd.Err()
+ }
+ return convertCandidateResults(cmd), nil
+}
+
+func (r *RedisClient) GetImmatureBlocks(maxHeight int64) ([]*BlockData, error) {
+ option := redis.ZRangeByScore{Min: "0", Max: strconv.FormatInt(maxHeight, 10)}
+ cmd := r.client.ZRangeByScoreWithScores(r.formatKey("blocks", "immature"), option)
+ if cmd.Err() != nil {
+ return nil, cmd.Err()
+ }
+ return convertBlockResults(cmd), nil
+}
+
+func (r *RedisClient) GetRoundShares(height uint64, nonce string) (map[string]int64, error) {
+ result := make(map[string]int64)
+ cmd := r.client.HGetAllMap(r.formatKey("shares", formatRound(height), nonce))
+ if cmd.Err() != nil {
+ return nil, cmd.Err()
+ }
+ sharesMap, _ := cmd.Result()
+ for login, v := range sharesMap {
+ n, _ := strconv.ParseInt(v, 10, 64)
+ result[login] = n
+ }
+ return result, nil
+}
+
+func (r *RedisClient) GetPayees() ([]string, error) {
+ var result []string
+ payees := make(map[string]bool)
+ cmd := r.client.Keys(r.formatKey("miners", "*"))
+ if cmd.Err() != nil {
+ return nil, cmd.Err()
+ }
+ for _, worker := range cmd.Val() {
+ login := strings.Split(worker, ":")[2]
+ payees[login] = true
+ }
+ for login := range payees {
+ result = append(result, login)
+ }
+ return result, nil
+}
+
+func (r *RedisClient) GetBalance(login string) (int64, error) {
+ cmd := r.client.HGet(r.formatKey("miners", login), "balance")
+ if cmd.Err() != nil {
+ return 0, cmd.Err()
+ }
+ return cmd.Int64()
+}
+
+// Update balance after TX sent
+func (r *RedisClient) UpdateBalance(login, txHash string, amount int64) error {
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ ts := util.MakeTimestamp() / 1000
+
+ _, err := tx.Exec(func() error {
+ tx.HIncrBy(r.formatKey("miners", login), "balance", (amount * -1))
+ tx.HIncrBy(r.formatKey("miners", login), "paid", amount)
+ tx.HIncrBy(r.formatKey("finances"), "balance", (amount * -1))
+ tx.HIncrBy(r.formatKey("finances"), "paid", amount)
+ tx.ZAdd(r.formatKey("payments", "all"), redis.Z{Score: float64(ts), Member: join(txHash, login, amount)})
+ tx.ZAdd(r.formatKey("payments", login), redis.Z{Score: float64(ts), Member: join(txHash, amount)})
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) WriteImmatureBlock(block *BlockData, roundRewards map[string]int64) error {
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ _, err := tx.Exec(func() error {
+ r.writeImmatureBlock(tx, block)
+ total := int64(0)
+ for login, amount := range roundRewards {
+ total += amount
+ tx.HIncrBy(r.formatKey("miners", login), "immature", amount)
+ tx.HSetNX(r.formatKey("credits:immature", block.Height, block.Hash), login, strconv.FormatInt(amount, 10))
+ }
+ tx.HIncrBy(r.formatKey("finances"), "immature", total)
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) WriteMaturedBlock(block *BlockData, roundRewards map[string]int64) error {
+ creditKey := r.formatKey("credits:immature", block.Height, block.Hash)
+ tx, err := r.client.Watch(creditKey)
+ if err != nil {
+ return err
+ }
+ // Must decrement immatures using existing log entry
+ immatureCredits := tx.HGetAllMap(creditKey)
+ defer tx.Close()
+
+ ts := util.MakeTimestamp() / 1000
+ value := join(block.Hash, ts, block.Reward)
+
+ _, err = tx.Exec(func() error {
+ r.writeMaturedBlock(tx, block)
+ tx.ZAdd(r.formatKey("credits", "all"), redis.Z{Score: float64(block.Height), Member: value})
+
+ // Decrement immature balances
+ totalImmature := int64(0)
+ for login, amountString := range immatureCredits.Val() {
+ amount, _ := strconv.ParseInt(amountString, 10, 64)
+ totalImmature += amount
+ tx.HIncrBy(r.formatKey("miners", login), "immature", (amount * -1))
+ }
+
+ // Increment balances
+ total := int64(0)
+ for login, amount := range roundRewards {
+ total += amount
+ // NOTICE: Maybe expire round reward entry in 604800 (a week)?
+ tx.HIncrBy(r.formatKey("miners", login), "balance", amount)
+ tx.HSetNX(r.formatKey("credits", block.Height, block.Hash), login, strconv.FormatInt(amount, 10))
+ }
+ tx.Del(creditKey)
+ tx.HIncrBy(r.formatKey("finances"), "balance", total)
+ tx.HIncrBy(r.formatKey("finances"), "immature", (totalImmature * -1))
+ tx.HSet(r.formatKey("finances"), "lastCreditHeight", strconv.FormatInt(block.Height, 10))
+ tx.HSet(r.formatKey("finances"), "lastCreditHash", block.Hash)
+ tx.HIncrBy(r.formatKey("finances"), "totalMined", block.RewardInShannon())
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) WriteOrphan(block *BlockData) error {
+ creditKey := r.formatKey("credits:immature", block.Height, block.Hash)
+ tx, err := r.client.Watch(creditKey)
+ if err != nil {
+ return err
+ }
+ // Must decrement immatures using existing log entry
+ immatureCredits := tx.HGetAllMap(creditKey)
+ defer tx.Close()
+
+ _, err = tx.Exec(func() error {
+ r.writeMaturedBlock(tx, block)
+
+ // Decrement immature balances
+ totalImmature := int64(0)
+ for login, amountString := range immatureCredits.Val() {
+ amount, _ := strconv.ParseInt(amountString, 10, 64)
+ totalImmature += amount
+ tx.HIncrBy(r.formatKey("miners", login), "immature", (amount * -1))
+ }
+ tx.Del(creditKey)
+ tx.HIncrBy(r.formatKey("finances"), "immature", (totalImmature * -1))
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) WritePendingOrphans(blocks []*BlockData) error {
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ _, err := tx.Exec(func() error {
+ for _, block := range blocks {
+ r.writeImmatureBlock(tx, block)
+ }
+ return nil
+ })
+ return err
+}
+
+func (r *RedisClient) writeImmatureBlock(tx *redis.Multi, block *BlockData) {
+ tx.ZRem(r.formatKey("blocks", "candidates"), block.candidateKey)
+ tx.ZAdd(r.formatKey("blocks", "immature"), redis.Z{Score: float64(block.Height), Member: block.key()})
+}
+
+func (r *RedisClient) writeMaturedBlock(tx *redis.Multi, block *BlockData) {
+ tx.Del(r.formatKey("shares", formatRound(uint64(block.Height)), block.Nonce))
+ tx.ZRem(r.formatKey("blocks", "immature"), block.immatureKey)
+ tx.ZAdd(r.formatKey("blocks", "matured"), redis.Z{Score: float64(block.Height), Member: block.key()})
+}
+
+func (r *RedisClient) GetMinerStats(login string, maxPayments int64) (map[string]interface{}, error) {
+ stats := make(map[string]interface{})
+
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ cmds, err := tx.Exec(func() error {
+ tx.HGetAllMap(r.formatKey("miners", login))
+ tx.ZRevRangeWithScores(r.formatKey("payments", login), 0, maxPayments-1)
+ tx.ZCard(r.formatKey("payments", login))
+ tx.HGet(r.formatKey("shares", "roundCurrent"), login)
+ return nil
+ })
+
+ if err != nil && err != redis.Nil {
+ return nil, err
+ } else {
+ stats["stats"], _ = cmds[0].(*redis.StringStringMapCmd).Result()
+ payments := convertPaymentsResults(cmds[1].(*redis.ZSliceCmd))
+ stats["payments"] = payments
+ stats["paymentsTotal"] = cmds[2].(*redis.IntCmd).Val()
+ roundShares, _ := cmds[3].(*redis.StringCmd).Int64()
+ stats["roundShares"] = roundShares
+ }
+
+ return stats, nil
+}
+
+// WARNING: Must run it periodically to flush out of window hashrate entries
+func (r *RedisClient) FlushStaleStats(largeWindow time.Duration) (int64, error) {
+ now := util.MakeTimestamp() / 1000
+ max := fmt.Sprint("(", now-int64(largeWindow/time.Second))
+ total := int64(0)
+ n, err := r.client.ZRemRangeByScore(r.formatKey("hashrate"), "-inf", max).Result()
+ if err != nil {
+ return total, err
+ }
+ total += n
+
+ keys, err := r.client.Keys(r.formatKey("hashrate", "*")).Result()
+ if err != nil {
+ return total, err
+ }
+ for _, worker := range keys {
+ login := strings.Split(worker, ":")[2]
+ n, err = r.client.ZRemRangeByScore(r.formatKey("hashrate", login), "-inf", max).Result()
+ if err != nil {
+ return total, err
+ }
+ total += n
+ }
+ return total, nil
+}
+
+func (r *RedisClient) CollectStats(smallWindow time.Duration, maxBlocks, maxPayments int64) (map[string]interface{}, error) {
+ window := int64(smallWindow / time.Second)
+ stats := make(map[string]interface{})
+
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ now := util.MakeTimestamp() / 1000
+
+ cmds, err := tx.Exec(func() error {
+ tx.ZRemRangeByScore(r.formatKey("hashrate"), "-inf", fmt.Sprint("(", now-window))
+ tx.ZRangeWithScores(r.formatKey("hashrate"), 0, -1)
+ tx.HGetAllMap(r.formatKey("stats"))
+ tx.ZRevRangeWithScores(r.formatKey("blocks", "candidates"), 0, -1)
+ tx.ZRevRangeWithScores(r.formatKey("blocks", "immature"), 0, -1)
+ tx.ZRevRangeWithScores(r.formatKey("blocks", "matured"), 0, maxBlocks-1)
+ tx.ZCard(r.formatKey("blocks", "candidates"))
+ tx.ZCard(r.formatKey("blocks", "immature"))
+ tx.ZCard(r.formatKey("blocks", "matured"))
+ tx.ZCard(r.formatKey("payments", "all"))
+ tx.ZRevRangeWithScores(r.formatKey("payments", "all"), 0, maxPayments-1)
+ return nil
+ })
+
+ if err != nil {
+ return nil, err
+ }
+
+ stats["stats"], _ = cmds[2].(*redis.StringStringMapCmd).Result()
+ candidates := convertCandidateResults(cmds[3].(*redis.ZSliceCmd))
+ stats["candidates"] = candidates
+ stats["candidatesTotal"] = cmds[6].(*redis.IntCmd).Val()
+
+ immature := convertBlockResults(cmds[4].(*redis.ZSliceCmd))
+ stats["immature"] = immature
+ stats["immatureTotal"] = cmds[7].(*redis.IntCmd).Val()
+
+ matured := convertBlockResults(cmds[5].(*redis.ZSliceCmd))
+ stats["matured"] = matured
+ stats["maturedTotal"] = cmds[8].(*redis.IntCmd).Val()
+
+ payments := convertPaymentsResults(cmds[10].(*redis.ZSliceCmd))
+ stats["payments"] = payments
+ stats["paymentsTotal"] = cmds[9].(*redis.IntCmd).Val()
+
+ totalHashrate, miners := convertMinersStats(window, cmds[1].(*redis.ZSliceCmd))
+ stats["miners"] = miners
+ stats["minersTotal"] = len(miners)
+ stats["hashrate"] = totalHashrate
+ return stats, nil
+}
+
+func (r *RedisClient) CollectWorkersStats(sWindow, lWindow time.Duration, login string) (map[string]interface{}, error) {
+ smallWindow := int64(sWindow / time.Second)
+ largeWindow := int64(lWindow / time.Second)
+ stats := make(map[string]interface{})
+
+ tx := r.client.Multi()
+ defer tx.Close()
+
+ now := util.MakeTimestamp() / 1000
+
+ cmds, err := tx.Exec(func() error {
+ tx.ZRemRangeByScore(r.formatKey("hashrate", login), "-inf", fmt.Sprint("(", now-largeWindow))
+ tx.ZRangeWithScores(r.formatKey("hashrate", login), 0, -1)
+ return nil
+ })
+
+ if err != nil {
+ return nil, err
+ }
+
+ totalHashrate := int64(0)
+ currentHashrate := int64(0)
+ online := int64(0)
+ offline := int64(0)
+ workers := convertWorkersStats(smallWindow, cmds[1].(*redis.ZSliceCmd))
+
+ for id, worker := range workers {
+ timeOnline := now - worker.startedAt
+ if timeOnline < 600 {
+ timeOnline = 600
+ }
+
+ boundary := timeOnline
+ if timeOnline >= smallWindow {
+ boundary = smallWindow
+ }
+ worker.HR = worker.HR / boundary
+
+ boundary = timeOnline
+ if timeOnline >= largeWindow {
+ boundary = largeWindow
+ }
+ worker.TotalHR = worker.TotalHR / boundary
+
+ if worker.LastBeat < (now - smallWindow/2) {
+ worker.Offline = true
+ offline++
+ } else {
+ online++
+ }
+
+ currentHashrate += worker.HR
+ totalHashrate += worker.TotalHR
+ workers[id] = worker
+ }
+ stats["workers"] = workers
+ stats["workersTotal"] = len(workers)
+ stats["workersOnline"] = online
+ stats["workersOffline"] = offline
+ stats["hashrate"] = totalHashrate
+ stats["currentHashrate"] = currentHashrate
+ return stats, nil
+}
+
+func convertCandidateResults(raw *redis.ZSliceCmd) []*BlockData {
+ var result []*BlockData
+ for _, v := range raw.Val() {
+ // "nonce:powHash:mixDigest:timestamp:diff:totalShares"
+ block := BlockData{}
+ block.Height = int64(v.Score)
+ fields := strings.Split(v.Member.(string), ":")
+ block.Nonce = fields[0]
+ block.PowHash = fields[1]
+ block.MixDigest = fields[2]
+ block.Timestamp, _ = strconv.ParseInt(fields[3], 10, 64)
+ block.Difficulty = fields[4]
+ block.TotalShares, _ = strconv.ParseInt(fields[5], 10, 64)
+ block.candidateKey = v.Member.(string)
+ result = append(result, &block)
+ }
+ return result
+}
+
+func convertBlockResults(raw *redis.ZSliceCmd) []*BlockData {
+ var result []*BlockData
+ for _, v := range raw.Val() {
+ // "uncle:orphan:nonce:blockHash:timestamp:diff:totalShares:rewardInWei"
+ block := BlockData{}
+ block.Height = int64(v.Score)
+ fields := strings.Split(v.Member.(string), ":")
+ block.Uncle, _ = strconv.ParseBool(fields[0])
+ block.Orphan, _ = strconv.ParseBool(fields[1])
+ block.Nonce = fields[2]
+ block.Hash = fields[3]
+ block.Timestamp, _ = strconv.ParseInt(fields[4], 10, 64)
+ block.Difficulty = fields[5]
+ block.TotalShares, _ = strconv.ParseInt(fields[6], 10, 64)
+ block.RewardString = fields[7]
+ block.ImmatureReward = fields[7]
+ block.immatureKey = v.Member.(string)
+ result = append(result, &block)
+ }
+ return result
+}
+
+// Build a per-worker total shares map for one login: {'rig-1': 12345, 'rig-2': 6789, ...}
+// TS => diff, id, ms
+func convertWorkersStats(window int64, raw *redis.ZSliceCmd) map[string]Worker {
+ now := util.MakeTimestamp() / 1000
+ workers := make(map[string]Worker)
+
+ for _, v := range raw.Val() {
+ parts := strings.Split(v.Member.(string), ":")
+ share, _ := strconv.ParseInt(parts[0], 10, 64)
+ id := parts[1]
+ score := int64(v.Score)
+ worker := workers[id]
+
+ // Add for large window
+ worker.TotalHR += share
+
+ // Add for small window if matches
+ if score >= now-window {
+ worker.HR += share
+ }
+
+ if worker.LastBeat < score {
+ worker.LastBeat = score
+ }
+ if worker.startedAt > score || worker.startedAt == 0 {
+ worker.startedAt = score
+ }
+ workers[id] = worker
+ }
+ return workers
+}
+
+func convertMinersStats(window int64, raw *redis.ZSliceCmd) (int64, map[string]Miner) {
+ now := util.MakeTimestamp() / 1000
+ miners := make(map[string]Miner)
+ totalHashrate := int64(0)
+
+ for _, v := range raw.Val() {
+ parts := strings.Split(v.Member.(string), ":")
+ share, _ := strconv.ParseInt(parts[0], 10, 64)
+ id := parts[1]
+ score := int64(v.Score)
+ miner := miners[id]
+ miner.HR += share
+
+ if miner.LastBeat < score {
+ miner.LastBeat = score
+ }
+ if miner.startedAt > score || miner.startedAt == 0 {
+ miner.startedAt = score
+ }
+ miners[id] = miner
+ }
+
+ for id, miner := range miners {
+ timeOnline := now - miner.startedAt
+ if timeOnline < 600 {
+ timeOnline = 600
+ }
+
+ boundary := timeOnline
+ if timeOnline >= window {
+ boundary = window
+ }
+ miner.HR = miner.HR / boundary
+
+ if miner.LastBeat < (now - window/2) {
+ miner.Offline = true
+ }
+ totalHashrate += miner.HR
+ miners[id] = miner
+ }
+ return totalHashrate, miners
+}
+
+func convertPaymentsResults(raw *redis.ZSliceCmd) []map[string]interface{} {
+ var result []map[string]interface{}
+ for _, v := range raw.Val() {
+ tx := make(map[string]interface{})
+ tx["timestamp"] = int64(v.Score)
+ fields := strings.Split(v.Member.(string), ":")
+ tx["tx"] = fields[0]
+ // Individual or whole payments row
+ if len(fields) < 3 {
+ tx["amount"], _ = strconv.ParseInt(fields[1], 10, 64)
+ } else {
+ tx["address"] = fields[1]
+ tx["amount"], _ = strconv.ParseInt(fields[2], 10, 64)
+ }
+ result = append(result, tx)
+ }
+ return result
+}
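
The storage layer can likewise be smoke-tested in isolation. A minimal sketch, assuming a local redis, the `eth` prefix from the sample config, and a hypothetical import path:

```go
package main

import (
	"log"
	"time"

	"ether-pool/storage" // assumed import path
)

func main() {
	backend := storage.NewRedisClient(&storage.Config{
		Endpoint: "127.0.0.1:6379",
		Database: 0,
		PoolSize: 10,
	}, "eth")

	if _, err := backend.Check(); err != nil {
		log.Fatalf("redis unreachable: %v", err)
	}

	// One share of difficulty 2000000000 (the sample config value) for a
	// made-up miner; per-miner hashrate entries expire after 30 minutes.
	exist, err := backend.WriteShare(
		"0xb85150eb365e7df0941f0cf08235f987ba91506a", "rig-1",
		"0x1234", "0xabcd", 100, 2000000000, 30*time.Minute)
	log.Printf("duplicate=%v err=%v", exist, err)
}
```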
diff --git a/upstart.conf b/upstart.conf
new file mode 100644
index 0000000..3059307
--- /dev/null
+++ b/upstart.conf
@@ -0,0 +1,28 @@
+# Ether-Pool
+description "Ether-Pool"
+
+env DAEMON=/home/main/src/ether-pool/ether-pool
+env CONFIG=/home/main/src/ether-pool/config.json
+env NAME=ether-pool
+
+start on filesystem or runlevel [2345]
+stop on runlevel [!2345]
+
+setuid main
+setgid main
+
+kill signal INT
+
+respawn
+respawn limit 10 5
+umask 022
+
+pre-start script
+ test -x $DAEMON || { stop; exit 0; }
+end script
+
+# Start
+script
+ #test -f /etc/default/$NAME && . /etc/default/$NAME
+ exec $DAEMON $CONFIG
+end script
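
This is a stock Upstart job; the job name comes from the file name under `/etc/init`, so the usual workflow is:

    sudo cp upstart.conf /etc/init/ether-pool.conf
    sudo initctl reload-configuration
    sudo start ether-pool
    sudo status ether-pool

Note `kill signal INT`: Upstart stops the pool with SIGINT rather than the default SIGTERM, presumably so the process can shut down cleanly.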
diff --git a/util/util.go b/util/util.go
new file mode 100644
index 0000000..d5692df
--- /dev/null
+++ b/util/util.go
@@ -0,0 +1,44 @@
+package util
+
+import (
+ "math/big"
+ "time"
+ "strconv"
+
+ "github.com/ethereum/go-ethereum/common"
+)
+
+var pow256 = common.BigPow(2, 256)
+
+func MakeTimestamp() int64 {
+ return time.Now().UnixNano() / int64(time.Millisecond)
+}
+
+func GetTargetHex(diff int64) string {
+ difficulty := big.NewInt(diff)
+ diff1 := new(big.Int).Div(pow256, difficulty)
+ return common.ToHex(diff1.Bytes())
+}
+
+func ToHex(n int64) string {
+ return "0x0" + strconv.FormatInt(n, 16)
+}
+
+func FormatReward(reward *big.Int) string {
+ return reward.String()
+}
+
+func FormatRatReward(reward *big.Rat) string {
+ wei := new(big.Rat).SetInt(common.Ether)
+ reward = reward.Quo(reward, wei)
+ return reward.FloatString(8)
+}
+
+func StringInSlice(a string, list []string) bool {
+ for _, b := range list {
+ if b == a {
+ return true
+ }
+ }
+ return false
+}
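
`GetTargetHex` is the share-target math the proxy hands to miners: target = 2^256 / difficulty, hex-encoded. A standalone sketch of the same computation using only the standard library (the patch uses `common.BigPow` from go-ethereum instead):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	pow256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	diff := big.NewInt(2000000000) // sample config difficulty
	target := new(big.Int).Div(pow256, diff)
	fmt.Printf("0x%064x\n", target)
	// A submitted share is valid when its ethash result, read as a
	// 256-bit big-endian integer, is at or below this target.
}
```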
diff --git a/www/.bowerrc b/www/.bowerrc
new file mode 100644
index 0000000..959e169
--- /dev/null
+++ b/www/.bowerrc
@@ -0,0 +1,4 @@
+{
+ "directory": "bower_components",
+ "analytics": false
+}
diff --git a/www/.editorconfig b/www/.editorconfig
new file mode 100644
index 0000000..47c5438
--- /dev/null
+++ b/www/.editorconfig
@@ -0,0 +1,34 @@
+# EditorConfig helps developers define and maintain consistent
+# coding styles between different editors and IDEs
+# editorconfig.org
+
+root = true
+
+
+[*]
+end_of_line = lf
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+indent_style = space
+indent_size = 2
+
+[*.js]
+indent_style = space
+indent_size = 2
+
+[*.hbs]
+insert_final_newline = false
+indent_style = space
+indent_size = 2
+
+[*.css]
+indent_style = space
+indent_size = 2
+
+[*.html]
+indent_style = space
+indent_size = 2
+
+[*.{diff,md}]
+trim_trailing_whitespace = false
diff --git a/www/.ember-cli b/www/.ember-cli
new file mode 100644
index 0000000..ee64cfe
--- /dev/null
+++ b/www/.ember-cli
@@ -0,0 +1,9 @@
+{
+ /**
+ Ember CLI sends analytics information by default. The data is completely
+ anonymous, but there are times when you might want to disable this behavior.
+
+ Setting `disableAnalytics` to true will prevent any data from being sent.
+ */
+ "disableAnalytics": false
+}
diff --git a/www/.gitignore b/www/.gitignore
new file mode 100644
index 0000000..86fceae
--- /dev/null
+++ b/www/.gitignore
@@ -0,0 +1,17 @@
+# See http://help.github.com/ignore-files/ for more about ignoring files.
+
+# compiled output
+/dist
+/tmp
+
+# dependencies
+/node_modules
+/bower_components
+
+# misc
+/.sass-cache
+/connect.lock
+/coverage/*
+/libpeerconnection.log
+npm-debug.log
+testem.log
diff --git a/www/.jshintrc b/www/.jshintrc
new file mode 100644
index 0000000..e75f719
--- /dev/null
+++ b/www/.jshintrc
@@ -0,0 +1,33 @@
+{
+ "predef": [
+ "document",
+ "window",
+ "-Promise",
+ "moment"
+ ],
+ "browser": true,
+ "boss": true,
+ "curly": true,
+ "debug": false,
+ "devel": true,
+ "eqeqeq": true,
+ "evil": true,
+ "forin": false,
+ "immed": false,
+ "laxbreak": false,
+ "newcap": true,
+ "noarg": true,
+ "noempty": false,
+ "nonew": false,
+ "nomen": false,
+ "onevar": false,
+ "plusplus": false,
+ "regexp": false,
+ "undef": true,
+ "sub": true,
+ "strict": false,
+ "white": false,
+ "eqnull": true,
+ "esnext": true,
+ "unused": true
+}
diff --git a/www/.travis.yml b/www/.travis.yml
new file mode 100644
index 0000000..66dd107
--- /dev/null
+++ b/www/.travis.yml
@@ -0,0 +1,23 @@
+---
+language: node_js
+node_js:
+ - "0.12"
+
+sudo: false
+
+cache:
+ directories:
+ - node_modules
+
+before_install:
+ - export PATH=/usr/local/phantomjs-2.0.0/bin:$PATH
+ - "npm config set spin false"
+ - "npm install -g npm@^2"
+
+install:
+ - npm install -g bower
+ - npm install
+ - bower install
+
+script:
+ - npm test
diff --git a/www/.watchmanconfig b/www/.watchmanconfig
new file mode 100644
index 0000000..5e9462c
--- /dev/null
+++ b/www/.watchmanconfig
@@ -0,0 +1,3 @@
+{
+ "ignore_dirs": ["tmp"]
+}
diff --git a/www/README.md b/www/README.md
new file mode 100644
index 0000000..5a3c03f
--- /dev/null
+++ b/www/README.md
@@ -0,0 +1,53 @@
+# Pool
+
+This README outlines the details of collaborating on this Ember application.
+A short introduction of this app could easily go here.
+
+## Prerequisites
+
+You will need the following things properly installed on your computer.
+
+* [Git](http://git-scm.com/)
+* [Node.js](http://nodejs.org/) (with NPM)
+* [Bower](http://bower.io/)
+* [Ember CLI](http://www.ember-cli.com/)
+* [PhantomJS](http://phantomjs.org/)
+
+## Installation
+
+* `git clone <repository-url>` this repository
+* change into the new directory
+* `npm install`
+* `bower install`
+
+## Running / Development
+
+* `ember server`
+* Visit your app at [http://localhost:4200](http://localhost:4200).
+
+### Code Generators
+
+Make use of the many generators for code, try `ember help generate` for more details
+
+### Running Tests
+
+* `ember test`
+* `ember test --server`
+
+### Building
+
+* `ember build` (development)
+* `ember build --environment production` (production)
+
+### Deploying
+
+Specify what it takes to deploy your app.
+
+## Further Reading / Useful Links
+
+* [ember.js](http://emberjs.com/)
+* [ember-cli](http://www.ember-cli.com/)
+* Development Browser Extensions
+  * [ember inspector for chrome](https://chrome.google.com/webstore/detail/ember-inspector/bmdblncegkenkacieihfhpjfppoconhi)
+  * [ember inspector for firefox](https://addons.mozilla.org/en-US/firefox/addon/ember-inspector/)

[The remainder of the patch adds the Handlebars templates under www/app/templates (account worker and payment tables, block lists, help, not-found, and pool payments pages); their markup was garbled in extraction. The one fully recoverable detail is the help page's connection example: point ethminer at http://example.net:8888/miner/YOUR_ETH_ADDRESS/RIG_ID, e.g. `ethminer -F http://example.net:8888/miner/0xb85150eb365e7df0941f0cf08235f987ba91506a/myfarm -G --farm-recheck 200`, adding `--disable-submit-hashrate` when building ethminer from recent source.]