feat(server): add getTunneltime to manager metrics #1581

Closed
29 changes: 29 additions & 0 deletions src/shadowbox/server/api.yml
@@ -418,6 +418,35 @@ paths:
examples:
'0':
value: '{"bytesTransferredByUserId":{"1":1008040941,"2":5958113497,"3":752221577}}'
/metrics/tunnel-time-location:
Collaborator commented:

Can this be /metrics/tunnel-time? I thought you mentioned there was another one per access key, but I don't see it. How are we going to report by key?

Did Sander have an opinion on this?

@daniellacosse (Contributor, Author) commented on Oct 30, 2024:

You can see both metrics at http://127.0.0.1:9092/metrics here:

[Screenshot of the Prometheus /metrics page, 2024-10-30 9:14 AM]

I think we should make a separate endpoint for the tunnel time report by key.

Also, I think Sander approved this back when the endpoint was similar, but I didn't check.

Collaborator commented:

We will know better about the API once we start using it for real. So perhaps we should pause this and focus on the manager side to better understand the usage.

For instance, we need tunnel time per access key too. Will the manager be forced to issue 3 queries?
Perhaps it would be best to issue one query, maybe under /metrics/usage, that returns all the usage metrics we need. That would simplify the Manager, and it would also let us deprecate the bad transfer endpoint. In the future we can add more fields to the usage endpoint instead of adding a new endpoint per metric.
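To make the proposal concrete, a consolidated /metrics/usage response might look like the sketch below. This is purely hypothetical: none of these field names or shapes are part of this PR, and the endpoint does not exist yet.

```typescript
// Hypothetical shape for a single consolidated /metrics/usage response,
// illustrating the reviewer's idea of one query that covers both the
// per-location and per-access-key rollups. All names here are assumptions.
interface Duration {
  seconds: number;
}

interface LocationUsage {
  location?: string;
  asn?: number;
  as_org?: string;
  tunnel_time: Duration;
}

interface AccessKeyUsage {
  access_key_id: string;
  tunnel_time: Duration;
  data_transferred: {bytes: number};
}

interface UsageResponse {
  // Could replace /metrics/tunnel-time-location.
  locations: LocationUsage[];
  // Could replace the /metrics/transfer endpoint mentioned above.
  access_keys: AccessKeyUsage[];
}

// Example payload the Manager could consume with a single request:
const example: UsageResponse = {
  locations: [
    {
      location: 'US',
      asn: 7922,
      as_org: 'Comcast Cable Communications, LLC',
      tunnel_time: {seconds: 2523432},
    },
  ],
  access_keys: [
    {access_key_id: '0', tunnel_time: {seconds: 120}, data_transferred: {bytes: 1008040941}},
  ],
};
```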

get:
description: Returns the tunnel time per location
tags:
- Tunnel time
responses:
'200':
description: The tunnel time per location
content:
application/json:
schema:
type: array
items:
type: object
properties:
location:
type: string
asn:
type: number
as_org:
type: string
tunnel_time:
type: object
properties:
seconds:
type: number
examples:
'0':
value: '[{"location":"US","asn":7922,"as_org":"Comcast Cable Communications, LLC","tunnel_time":{"seconds":2523432}}]'
/metrics/enabled:
get:
description: Returns whether metrics is being shared
20 changes: 18 additions & 2 deletions src/shadowbox/server/manager_metrics.spec.ts
@@ -13,12 +13,15 @@
// limitations under the License.

import {PrometheusManagerMetrics} from './manager_metrics';
import {FakePrometheusClient} from './mocks/mocks';
import {
FakeDataBytesTransferredPrometheusClient,
FakeTunnelTimePrometheusClient,
} from './mocks/mocks';

describe('PrometheusManagerMetrics', () => {
it('getOutboundByteTransfer', async (done) => {
const managerMetrics = new PrometheusManagerMetrics(
new FakePrometheusClient({'access-key-1': 1000, 'access-key-2': 10000})
new FakeDataBytesTransferredPrometheusClient({'access-key-1': 1000, 'access-key-2': 10000})
);
const dataUsage = await managerMetrics.getOutboundByteTransfer({hours: 0});
const bytesTransferredByUserId = dataUsage.bytesTransferredByUserId;
@@ -27,4 +30,17 @@ describe('PrometheusManagerMetrics', () => {
expect(bytesTransferredByUserId['access-key-2']).toEqual(10000);
done();
});

it('getTunnelTimeByLocation', async (done) => {
const managerMetrics = new PrometheusManagerMetrics(
new FakeTunnelTimePrometheusClient({US: {1: 1000, 2: 1000}, CA: {3: 2000}})
);
const tunnelTime = await managerMetrics.getTunnelTimeByLocation({time_window: {seconds: 0}});
expect(tunnelTime).toEqual([
{location: 'US', asn: 1, as_org: undefined, tunnel_time: {seconds: 1000}},
{location: 'US', asn: 2, as_org: undefined, tunnel_time: {seconds: 1000}},
{location: 'CA', asn: 3, as_org: undefined, tunnel_time: {seconds: 2000}},
]);
done();
});
});
33 changes: 33 additions & 0 deletions src/shadowbox/server/manager_metrics.ts
@@ -15,8 +15,26 @@
import {PrometheusClient} from '../infrastructure/prometheus_scraper';
import {DataUsageByUser, DataUsageTimeframe} from '../model/metrics';

interface Duration {
seconds: number;
}

interface TunnelTimeRequest {
time_window: Duration;
}

interface TunnelTimeResponseEntry {
location?: string;
asn?: number;
as_org?: string;
tunnel_time: Duration;
}

type TunnelTimeResponse = TunnelTimeResponseEntry[];

export interface ManagerMetrics {
getOutboundByteTransfer(timeframe: DataUsageTimeframe): Promise<DataUsageByUser>;
getTunnelTimeByLocation(request: TunnelTimeRequest): Promise<TunnelTimeResponse>;
}

// Reads manager metrics from a Prometheus instance.
@@ -40,4 +58,19 @@ export class PrometheusManagerMetrics implements ManagerMetrics {
}
return {bytesTransferredByUserId: usage};
}

async getTunnelTimeByLocation(request: TunnelTimeRequest): Promise<TunnelTimeResponse> {
const {result} = await this.prometheusClient.query(
`sum(increase(shadowsocks_tunnel_time_seconds_per_location[${request.time_window.seconds}s])) by (location, asn, asorg)`
);

return result.map((entry) => ({
location: entry.metric['location'],
asn: entry.metric['asn'] !== undefined ? parseInt(entry.metric['asn'], 10) : undefined,
as_org: entry.metric['asorg'],
tunnel_time: {
seconds: parseFloat(entry.value[1]),
},
}));
}
}
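The label-to-response mapping inside getTunnelTimeByLocation can be exercised in isolation. The sketch below is standalone: `QueryResultEntry` and `toTunnelTimeResponse` are hypothetical local stand-ins for the real Prometheus client types, not imports from this PR.

```typescript
// Minimal stand-in for a Prometheus instant-vector sample: `metric` holds
// string label values, `value` is the [timestamp, sampleValue] pair.
interface QueryResultEntry {
  metric: {[labelName: string]: string};
  value: [number, string];
}

interface TunnelTimeEntry {
  location?: string;
  asn?: number;
  as_org?: string;
  tunnel_time: {seconds: number};
}

// Mirrors the mapping in PrometheusManagerMetrics.getTunnelTimeByLocation:
// parse `asn` as an integer when the label is present, and parse the sample
// value as seconds of tunnel time.
function toTunnelTimeResponse(result: QueryResultEntry[]): TunnelTimeEntry[] {
  return result.map((entry) => ({
    location: entry.metric['location'],
    asn: entry.metric['asn'] !== undefined ? parseInt(entry.metric['asn'], 10) : undefined,
    as_org: entry.metric['asorg'],
    tunnel_time: {seconds: parseFloat(entry.value[1])},
  }));
}

const mapped = toTunnelTimeResponse([
  {metric: {location: 'US', asn: '7922', asorg: 'Comcast'}, value: [0, '2523432']},
]);
// mapped[0] → {location: 'US', asn: 7922, as_org: 'Comcast', tunnel_time: {seconds: 2523432}}
```

Note how Prometheus label values arrive as strings, which is why the `asn` label needs an explicit parseInt in the production code above.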
4 changes: 2 additions & 2 deletions src/shadowbox/server/manager_service.spec.ts
@@ -20,7 +20,7 @@ import {InMemoryConfig, JsonConfig} from '../infrastructure/json_config';
import {AccessKey, AccessKeyRepository, DataLimit} from '../model/access_key';
import {ManagerMetrics} from './manager_metrics';
import {bindService, ShadowsocksManagerService} from './manager_service';
import {FakePrometheusClient, FakeShadowsocksServer} from './mocks/mocks';
import {FakeDataBytesTransferredPrometheusClient, FakeShadowsocksServer} from './mocks/mocks';
import {AccessKeyConfigJson, ServerAccessKeyRepository} from './server_access_key';
import {ServerConfigJson} from './server_config';
import {SharedMetricsPublisher} from './shared_metrics';
@@ -1284,6 +1284,6 @@ function getAccessKeyRepository(): ServerAccessKeyRepository {
'hostname',
new InMemoryConfig<AccessKeyConfigJson>({accessKeys: [], nextId: 0}),
new FakeShadowsocksServer(),
new FakePrometheusClient({})
new FakeDataBytesTransferredPrometheusClient({})
);
}
22 changes: 22 additions & 0 deletions src/shadowbox/server/manager_service.ts
@@ -75,6 +75,7 @@ interface RequestParams {
// method: string
[param: string]: unknown;
}

// Simplified request and response type interfaces containing only the
// properties we actually use, to make testing easier.
interface RequestType {
@@ -156,6 +157,10 @@ export function bindService(
);

apiServer.get(`${apiPrefix}/metrics/transfer`, service.getDataUsage.bind(service));
apiServer.get(
`${apiPrefix}/metrics/tunnel-time-location`,
service.getTunnelTimeByLocation.bind(service)
);
apiServer.get(`${apiPrefix}/metrics/enabled`, service.getShareMetrics.bind(service));
apiServer.put(`${apiPrefix}/metrics/enabled`, service.setShareMetrics.bind(service));

@@ -599,6 +604,23 @@ export class ShadowsocksManagerService {
}
}

async getTunnelTimeByLocation(req: RequestType, res: ResponseType, next: restify.Next) {
try {
logging.debug(`getTunnelTime request ${JSON.stringify(req.params)}`);
const response = await this.managerMetrics.getTunnelTimeByLocation({
time_window: {
seconds: 30 * 24 * 60 * 60,
},
});
res.send(HttpSuccess.OK, response);
logging.debug(`getTunnelTime response ${JSON.stringify(response)}`);
return next();
} catch (error) {
logging.error(error);
return next(new restifyErrors.InternalServerError());
}
}

getShareMetrics(req: RequestType, res: ResponseType, next: restify.Next): void {
logging.debug(`getShareMetrics request ${JSON.stringify(req.params)}`);
const response = {metricsEnabled: this.metricsPublisher.isSharingEnabled()};
25 changes: 24 additions & 1 deletion src/shadowbox/server/mocks/mocks.ts
@@ -48,7 +48,7 @@ export class FakeShadowsocksServer implements ShadowsocksServer {
}
}

export class FakePrometheusClient extends PrometheusClient {
export class FakeDataBytesTransferredPrometheusClient extends PrometheusClient {
constructor(public bytesTransferredById: {[accessKeyId: string]: number}) {
super('');
}
@@ -65,3 +65,26 @@ export class FakePrometheusClient extends PrometheusClient {
return queryResultData;
}
}

export class FakeTunnelTimePrometheusClient extends PrometheusClient {
constructor(public tunnelTimeByLocation: {[location: string]: {[asn: string]: number}}) {
super('');
}

async query(_query: string): Promise<QueryResultData> {
const queryResultData = {result: []} as QueryResultData;
for (const location of Object.keys(this.tunnelTimeByLocation)) {
const tunnelTimeByAsn = this.tunnelTimeByLocation[location] || {};

for (const asn of Object.keys(tunnelTimeByAsn)) {
const tunnelTime = tunnelTimeByAsn[asn] || 0;

queryResultData.result.push({
metric: {location, asn},
value: [tunnelTime, `${tunnelTime}`],
});
}
}
return queryResultData;
}
}
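The flattening that FakeTunnelTimePrometheusClient performs can be sketched as a standalone function. `buildFakeResult` below is a hypothetical helper written only to illustrate the shape of the fake query result; it is not part of this PR.

```typescript
// Standalone sketch of how a nested {location -> {asn -> seconds}} map
// flattens into Prometheus-style samples, as the fake client does.
interface FakeSample {
  metric: {location: string; asn: string};
  value: [number, string];
}

function buildFakeResult(tunnelTimeByLocation: {
  [location: string]: {[asn: string]: number};
}): FakeSample[] {
  const result: FakeSample[] = [];
  for (const location of Object.keys(tunnelTimeByLocation)) {
    const byAsn = tunnelTimeByLocation[location];
    for (const asn of Object.keys(byAsn)) {
      const seconds = byAsn[asn];
      // Prometheus label values are strings, so the ASN stays a string here;
      // the manager metrics code parses it back into a number.
      result.push({metric: {location, asn}, value: [seconds, `${seconds}`]});
    }
  }
  return result;
}

const fake = buildFakeResult({US: {'1': 1000, '2': 1000}, CA: {'3': 2000}});
// fake holds three samples: two for US (ASNs 1 and 2) and one for CA (ASN 3).
```

This matches the input used by the getTunnelTimeByLocation spec above, which expects exactly those three entries back from the metrics layer.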
38 changes: 21 additions & 17 deletions src/shadowbox/server/server_access_key.spec.ts
@@ -20,7 +20,7 @@ import {InMemoryConfig} from '../infrastructure/json_config';
import {AccessKeyId, AccessKeyRepository, DataLimit} from '../model/access_key';
import * as errors from '../model/errors';

import {FakePrometheusClient, FakeShadowsocksServer} from './mocks/mocks';
import {FakeDataBytesTransferredPrometheusClient, FakeShadowsocksServer} from './mocks/mocks';
import {AccessKeyConfigJson, ServerAccessKeyRepository} from './server_access_key';

describe('ServerAccessKeyRepository', () => {
@@ -337,7 +337,7 @@

it("setAccessKeyDataLimit can change a key's limit status", async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -361,7 +361,7 @@

it('setAccessKeyDataLimit overrides default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 750, '1': 1250});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 750, '1': 1250});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -395,7 +395,7 @@

it('removeAccessKeyDataLimit restores a key to the default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -413,7 +413,7 @@

it("setAccessKeyDataLimit can change a key's limit status", async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -437,7 +437,7 @@

it('setAccessKeyDataLimit overrides default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 750, '1': 1250});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 750, '1': 1250});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -478,7 +478,7 @@

it('removeAccessKeyDataLimit restores a key to the default data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -496,7 +496,7 @@

it('removeAccessKeyDataLimit can restore an over-limit access key', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -524,7 +524,7 @@

it('setDefaultDataLimit updates keys limit status', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 200});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500, '1': 200});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -568,7 +568,7 @@

it('removeDefaultDataLimit restores over-limit access keys', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 100});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500, '1': 100});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -592,7 +592,7 @@
});

it('enforceAccessKeyDataLimits updates keys limit status', async (done) => {
const prometheusClient = new FakePrometheusClient({
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({
'0': 100,
'1': 200,
'2': 300,
@@ -626,7 +626,7 @@
});

it('enforceAccessKeyDataLimits respects both default and per-key limits', async (done) => {
const prometheusClient = new FakePrometheusClient({'0': 200, '1': 300});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 200, '1': 300});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.defaultDataLimit({bytes: 500})
@@ -650,7 +650,7 @@

it('enforceAccessKeyDataLimits enables and disables keys', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 100});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 500, '1': 100});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -675,7 +675,7 @@

it('enforceAccessKeyDataLimits disables on exact data limit', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 0});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({'0': 0});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -743,7 +743,11 @@

it('start periodically enforces access key data limits', async (done) => {
const server = new FakeShadowsocksServer();
const prometheusClient = new FakePrometheusClient({'0': 500, '1': 200, '2': 400});
const prometheusClient = new FakeDataBytesTransferredPrometheusClient({
'0': 500,
'1': 200,
'2': 400,
});
const repo = new RepoBuilder()
.prometheusClient(prometheusClient)
.shadowsocksServer(server)
@@ -818,7 +822,7 @@ class RepoBuilder {
private port_ = 12345;
private keyConfig_ = new InMemoryConfig<AccessKeyConfigJson>({accessKeys: [], nextId: 0});
private shadowsocksServer_ = new FakeShadowsocksServer();
private prometheusClient_ = new FakePrometheusClient({});
private prometheusClient_ = new FakeDataBytesTransferredPrometheusClient({});
private defaultDataLimit_;

port(port: number): RepoBuilder {
@@ -833,7 +837,7 @@
this.shadowsocksServer_ = shadowsocksServer;
return this;
}
prometheusClient(prometheusClient: FakePrometheusClient): RepoBuilder {
prometheusClient(prometheusClient: FakeDataBytesTransferredPrometheusClient): RepoBuilder {
this.prometheusClient_ = prometheusClient;
return this;
}