
[reconfigurator] Reject clickhouse configurations from old generations #7347

Merged Feb 4, 2025 (44 commits; the view below shows changes from 12 commits)

Commits
8138c4c
generate replica files with generation number
karencfv Jan 15, 2025
50ab66c
generation number for keepers
karencfv Jan 15, 2025
2b3f10f
parse gen number from file
karencfv Jan 16, 2025
dd0a9b1
add some tests
karencfv Jan 16, 2025
96ac8cf
Extract server context for single node
karencfv Jan 16, 2025
948fc93
check incoming generation number is larger
karencfv Jan 17, 2025
2937b66
return init context error
karencfv Jan 17, 2025
30155e0
server context refactor
karencfv Jan 17, 2025
608b911
small refactor in SMF services structure
karencfv Jan 20, 2025
0cd9654
save generation to cache
karencfv Jan 20, 2025
be1afc7
use tokio mutex
karencfv Jan 20, 2025
6a500e6
clean up
karencfv Jan 21, 2025
779a549
remove tokio mutex
karencfv Jan 23, 2025
eb69ee5
change error messages
karencfv Jan 23, 2025
df4ba8d
hold the lock
karencfv Jan 24, 2025
c36522e
poc for long running task
karencfv Jan 24, 2025
e961a60
Use tokio task for generation
karencfv Jan 24, 2025
d279711
fmt
karencfv Jan 24, 2025
ef04ac7
use task for generating config
karencfv Jan 24, 2025
7d335b6
add init_db to task
karencfv Jan 27, 2025
b6e63fa
clean up
karencfv Jan 27, 2025
d067a71
same functionality for single node
karencfv Jan 27, 2025
861875d
remove handles
karencfv Jan 28, 2025
a75ebb9
use flume channel
karencfv Jan 28, 2025
def40c2
clean up
karencfv Jan 28, 2025
810e88b
extract methods into functions
karencfv Jan 29, 2025
c59d90f
slim down the implementation
karencfv Jan 29, 2025
cb21b84
add logging
karencfv Jan 29, 2025
cbff5cd
Begin expanding function parameters instead of taking a function
karencfv Jan 29, 2025
33ab408
initialise oximeter client differently
karencfv Jan 30, 2025
b8954ab
watcher for generation
karencfv Jan 30, 2025
cde78c3
same for keeper
karencfv Jan 30, 2025
089a2a4
Clean up
karencfv Jan 30, 2025
56b1fcc
Separate tasks
karencfv Jan 30, 2025
169a923
fmt
karencfv Jan 30, 2025
ca6b4c3
use try_send
karencfv Feb 3, 2025
f30ed7d
clean up
karencfv Feb 3, 2025
cc0f280
restructure watch channel
karencfv Feb 3, 2025
f5f78d4
unify long running task
karencfv Feb 3, 2025
ec8c12b
implement keeper
karencfv Feb 3, 2025
5057b8e
clean up
karencfv Feb 3, 2025
81b564c
clean up generation
karencfv Feb 4, 2025
9875341
error handling and clean up
karencfv Feb 4, 2025
7a974a7
fix tests
karencfv Feb 4, 2025
1 change: 1 addition & 0 deletions Cargo.lock

19 changes: 19 additions & 0 deletions clickhouse-admin/api/src/lib.rs
@@ -12,6 +12,7 @@ use dropshot::{
HttpError, HttpResponseCreated, HttpResponseOk,
HttpResponseUpdatedNoContent, Path, Query, RequestContext, TypedBody,
};
use omicron_common::api::external::Generation;

/// API interface for our clickhouse-admin-keeper server
///
@@ -42,6 +43,15 @@ pub trait ClickhouseAdminKeeperApi {
body: TypedBody<KeeperConfigurableSettings>,
) -> Result<HttpResponseCreated<KeeperConfig>, HttpError>;

/// Retrieve the generation number of a configuration
#[endpoint {
method = GET,
path = "/generation",
}]
async fn generation(
rqctx: RequestContext<Self::Context>,
) -> Result<HttpResponseOk<Generation>, HttpError>;

/// Retrieve a logically grouped information file from a keeper node.
/// This information is used internally by ZooKeeper to manage snapshots
/// and logs for consistency and recovery.
@@ -108,6 +118,15 @@ pub trait ClickhouseAdminServerApi {
body: TypedBody<ServerConfigurableSettings>,
) -> Result<HttpResponseCreated<ReplicaConfig>, HttpError>;

/// Retrieve the generation number of a configuration
#[endpoint {
method = GET,
path = "/generation",
}]
async fn generation(
rqctx: RequestContext<Self::Context>,
) -> Result<HttpResponseOk<Generation>, HttpError>;

/// Contains information about distributed ddl queries (ON CLUSTER clause)
/// that were executed on a cluster.
#[endpoint {
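Not shown in this hunk is how a server would implement the new endpoint. A minimal sketch, assuming the handler simply reads the generation cached in this PR's ServerContext; the error mapping for a missing generation is our assumption, not necessarily what the PR does:

// Hypothetical handler body: return the cached generation, or an error
// if no configuration has been generated on this node yet.
async fn generation(
    rqctx: RequestContext<Arc<ServerContext>>,
) -> Result<HttpResponseOk<Generation>, HttpError> {
    match rqctx.context().generation().await {
        Some(generation) => Ok(HttpResponseOk(generation)),
        None => Err(HttpError::for_unavail(
            None,
            "no clickhouse configuration generated yet".to_string(),
        )),
    }
}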
3 changes: 2 additions & 1 deletion clickhouse-admin/src/clickhouse_cli.rs
@@ -93,8 +93,9 @@ impl ClickhouseCli {
pub fn new(
binary_path: Utf8PathBuf,
listen_address: SocketAddrV6,
log: Logger,
log: &Logger,
) -> Self {
let log = log.new(slog::o!("component" => "ClickhouseCli"));
Comment on lines +96 to +98, karencfv (Contributor Author):
This is part of the refactoring; the logs were a bit of a mess.

Self { binary_path, listen_address, log }
}

7 changes: 4 additions & 3 deletions clickhouse-admin/src/clickward.rs
@@ -3,7 +3,8 @@
// file, You can obtain one at https://mozilla.org/MPL/2.0/.

use clickhouse_admin_types::{
KeeperConfig, KeeperSettings, ReplicaConfig, ServerSettings,
KeeperConfig, KeeperConfigurableSettings, ReplicaConfig,
ServerConfigurableSettings,
};
use dropshot::HttpError;
use slog_error_chain::{InlineErrorChain, SlogInlineError};
@@ -43,7 +44,7 @@ impl Clickward {

pub fn generate_server_config(
&self,
settings: ServerSettings,
settings: ServerConfigurableSettings,
) -> Result<ReplicaConfig, ClickwardError> {
let replica_config = settings
.generate_xml_file()
@@ -54,7 +55,7 @@

pub fn generate_keeper_config(
&self,
settings: KeeperSettings,
settings: KeeperConfigurableSettings,
) -> Result<KeeperConfig, ClickwardError> {
let keeper_config = settings
.generate_xml_file()
246 changes: 233 additions & 13 deletions clickhouse-admin/src/context.rs
@@ -3,26 +3,50 @@
// file, You can obtain one at https://mozilla.org/MPL/2.0/.

use crate::{ClickhouseCli, Clickward};

use anyhow::{anyhow, bail, Result};
use camino::Utf8PathBuf;
use clickhouse_admin_types::{
CLICKHOUSE_KEEPER_CONFIG_DIR, CLICKHOUSE_KEEPER_CONFIG_FILE,
CLICKHOUSE_SERVER_CONFIG_DIR, CLICKHOUSE_SERVER_CONFIG_FILE,
};
use omicron_common::address::CLICKHOUSE_TCP_PORT;
use omicron_common::api::external::Generation;
use oximeter_db::Client as OximeterClient;
use slog::Logger;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::net::SocketAddrV6;
use std::str::FromStr;
use std::sync::Arc;
use tokio::sync::Mutex;

pub struct KeeperServerContext {
clickward: Clickward,
clickhouse_cli: ClickhouseCli,
log: Logger,
pub generation: Mutex<Option<Generation>>,
}

impl KeeperServerContext {
pub fn new(clickhouse_cli: ClickhouseCli) -> Self {
let log = clickhouse_cli
.log
.new(slog::o!("component" => "KeeperServerContext"));
pub fn new(
log: &Logger,
binary_path: Utf8PathBuf,
listen_address: SocketAddrV6,
) -> Result<Self> {
let clickhouse_cli =
ClickhouseCli::new(binary_path, listen_address, log);
Comment on lines +40 to +46, karencfv (Contributor Author):
Refactor as well; there was no need to pass clickhouse_cli as a parameter while not passing clickward, etc.

let log = log.new(slog::o!("component" => "KeeperServerContext"));
let clickward = Clickward::new();
Self { clickward, clickhouse_cli, log }
let config_path = Utf8PathBuf::from_str(CLICKHOUSE_KEEPER_CONFIG_DIR)
Contributor:
Nit: I think all the uses of Utf8PathBuf::from_str(..) could instead be Utf8PathBuf::from(..) and then not need to be .unwrap()'d.

.unwrap()
.join(CLICKHOUSE_KEEPER_CONFIG_FILE);

// If there is already a configuration file with a generation number we'll
// use that. Otherwise, we set the generation number to None.
let gen = read_generation_from_file(config_path)?;
let generation = Mutex::new(gen);
Contributor:
It's become practice at Oxide to avoid tokio mutexes wherever possible as they have significant problems when cancelled and generally just don't do what we want. I realize there's already some usage here with regards to initialization. We don't have to fix that in this PR, but we should avoid adding new uses. We should instead use a std::sync::Mutex. I left a comment below about this as well.

See the following for more details:
https://rfd.shared.oxide.computer/rfd/0400#no_mutex
https://rfd.shared.oxide.computer/rfd/0397#_example_with_mutexes

karencfv (Contributor Author):
lol, I was definitely on the fence on that one; I went for consistency in the end: be1afc7#diff-c816600501b7aaa7de4a2eb9dc86498662030cea6390fa23e11a22c990efb510L28-L29

Thanks for the links! Hadn't seen those RFDs, will read them both

Ok(Self { clickward, clickhouse_cli, log, generation })
}

pub fn clickward(&self) -> &Clickward {
@@ -36,6 +60,10 @@ impl KeeperServerContext {
pub fn log(&self) -> &Logger {
&self.log
}

pub async fn generation(&self) -> Option<Generation> {
*self.generation.lock().await
Contributor:
We only need read access here, and so we can easily avoid an async mutex here. Generation is also Copy, so this is cheap. I'd suggest making this a synchronous function and calling *self.generation.lock() instead.

Contributor (follow-up):
Yeah, I was wrong here. I wasn't considering the usage of the generation with regards to concurrent requests.

}
}
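A minimal sketch of the synchronous getter suggested in the thread above, assuming the field were changed to a std::sync::Mutex; this reflects the reviewer's suggestion, not the code as written in this diff:

// Hypothetical variant with `generation: std::sync::Mutex<Option<Generation>>`.
pub fn generation(&self) -> Option<Generation> {
    // Generation is Copy, so dereferencing the guard is cheap, and with no
    // await point the lock is never held across a suspension.
    *self.generation.lock().unwrap()
}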

pub struct ServerContext {
@@ -44,24 +72,40 @@
oximeter_client: OximeterClient,
initialization_lock: Arc<Mutex<()>>,
log: Logger,
pub generation: Mutex<Option<Generation>>,
}

impl ServerContext {
pub fn new(clickhouse_cli: ClickhouseCli) -> Self {
let ip = clickhouse_cli.listen_address.ip();
pub fn new(
log: &Logger,
binary_path: Utf8PathBuf,
listen_address: SocketAddrV6,
) -> Result<Self> {
let clickhouse_cli =
ClickhouseCli::new(binary_path, listen_address, log);

let ip = listen_address.ip();
let address = SocketAddrV6::new(*ip, CLICKHOUSE_TCP_PORT, 0, 0);
let oximeter_client =
OximeterClient::new(address.into(), &clickhouse_cli.log);
let oximeter_client = OximeterClient::new(address.into(), log);
let clickward = Clickward::new();
let log =
clickhouse_cli.log.new(slog::o!("component" => "ServerContext"));
Self {
let log = log.new(slog::o!("component" => "ServerContext"));

let config_path = Utf8PathBuf::from_str(CLICKHOUSE_SERVER_CONFIG_DIR)
.unwrap()
.join(CLICKHOUSE_SERVER_CONFIG_FILE);

// If there is already a configuration file with a generation number we'll
// use that. Otherwise, we set the generation number to None.
let gen = read_generation_from_file(config_path)?;
let generation = Mutex::new(gen);
Ok(Self {
clickhouse_cli,
clickward,
oximeter_client,
initialization_lock: Arc::new(Mutex::new(())),
log,
}
generation,
})
}

pub fn clickhouse_cli(&self) -> &ClickhouseCli {
@@ -83,4 +127,180 @@
pub fn log(&self) -> &Logger {
&self.log
}

pub async fn generation(&self) -> Option<Generation> {
*self.generation.lock().await
}
}

pub struct SingleServerContext {
clickhouse_cli: ClickhouseCli,
oximeter_client: OximeterClient,
initialization_lock: Arc<Mutex<()>>,
log: Logger,
}

impl SingleServerContext {
pub fn new(
log: &Logger,
binary_path: Utf8PathBuf,
listen_address: SocketAddrV6,
) -> Self {
let clickhouse_cli =
ClickhouseCli::new(binary_path, listen_address, log);

let ip = listen_address.ip();
let address = SocketAddrV6::new(*ip, CLICKHOUSE_TCP_PORT, 0, 0);
let oximeter_client = OximeterClient::new(address.into(), log);
let log =
clickhouse_cli.log.new(slog::o!("component" => "ServerContext"));

Self {
clickhouse_cli,
oximeter_client,
initialization_lock: Arc::new(Mutex::new(())),
log,
}
}

pub fn clickhouse_cli(&self) -> &ClickhouseCli {
&self.clickhouse_cli
}

pub fn oximeter_client(&self) -> &OximeterClient {
&self.oximeter_client
}

pub fn initialization_lock(&self) -> Arc<Mutex<()>> {
self.initialization_lock.clone()
Contributor:
I'm not sure if this usage of a tokio lock is safe or not due to cancellation. It looks like it aligns with the exact usage we have in our ServerContext. I also don't have an easy workaround for this right now, and so I guess I'm fine leaving this in to keep moving.

@sunshowers @jgallagher Do you have any ideas here?

jgallagher (Contributor), Jan 22, 2025:
Various thoughts; sorry if some of this is obvious, but I don't have much context here so am just hopping in:

  • Cloning an Arc<tokio::Mutex<_>> is fine (the clone is fully at the Arc layer)
  • ... that said I don't think we need to clone here? Returning &Mutex<()> looks like it'd be okay.
  • Mutex<()> is kinda fishy and probably worthy of a comment, since typically the mutex is protecting some data. (Maybe there is one somewhere that I'm not seeing!)
  • It looks like the use of this is to prevent the /init_db endpoint from running concurrently? That is definitely not cancel safe. If dropshot were configured to cancel handlers on client disconnect, a client could start an /init_db, drop the request (unlocking the mutex), then start it again while the first one was still running.

On the last point: I think this is "fine" as long as dropshot is configured correctly (i.e., to not cancel handlers). If we wanted this to be correct even under cancellation, I'd probably move the init process into a separate tokio task and manage that either with channels or a sync mutex. Happy to expand on those ideas if it'd be helpful.

karencfv (Contributor Author):
Thanks for the input!

> Mutex<()> is kinda fishy and probably worthy of a comment, since typically the mutex is protecting some data. (Maybe there is one somewhere that I'm not seeing!)

Tbh, I'm just moving code around that was already here. I'm not really sure what the intention was initially.

> On the last point: I think this is "fine" as long as dropshot is configured correctly (i.e., to not cancel handlers). If we wanted this to be correct even under cancellation, I'd probably move the init process into a separate tokio task and manage that either with channels or a sync mutex.

That sounds like a good idea regardless of what the initial intention was. Do you mind expanding a little on those ideas? It'd definitely be helpful

jgallagher (Contributor), Jan 22, 2025:
Sure thing! One pattern we've used in a bunch of places is to spawn a long-lived tokio task and then communicate with it via channels. This looks something like (untested and lots of details omitted):

// kinds of things we can ask the task to do
enum Request {
    DoSomeThing {
        // any inputs from us the task needs
        data: DataNeededToDoSomeThing,
        // a oneshot channel the task uses to send us the result of our request
        response: oneshot::Sender<ResultOfSomeThing>,
    },
}

// the long-lived task: loop over incoming requests and handle them
async fn long_running_task(mut incoming: mpsc::Receiver<Request>) {
    // run until the sending half of `incoming` is dropped
    while let Some(request) = incoming.recv().await {
        match request {
            Request::DoSomeThing { data, response } => {
                let result = do_some_thing(data);
                let _ = response.send(result);
            }
        }
    }
}

// our main code: one time up front, create the channel we use to talk to the inner task and spawn that task
let (inner_tx, inner_rx) = mpsc::channel(N); // picking N here can be hard
let join_handle = tokio::spawn(long_running_task(inner_rx));

// ... somewhere else, when we want the task to do something for us ...
let (response_tx, response_rx) = oneshot::channel();
let _ = inner_tx.send(Request::DoSomeThing { data, response: response_tx }).await;
let result = response_rx.await;

A real example of this pattern (albeit more complex; I'm not finding any super simple ones at the moment) is in the bootstrap agent: here's where we spawn the inner task. It has a couple different channels for incoming requests, so its run loop is a tokio::select over those channels but is otherwise pretty similar to the outline above.

This pattern is nice because regardless of how many concurrent callers try to send messages to the inner task, it itself can do things serially. In my pseudocode above, if the ... somewhere else bit is an HTTP handler, even if we get a dozen concurrent requests, the inner task will process them one at a time because it's forcing serialization via the channel it's receiving on.

I really like this pattern. But it has some problems:

  • Picking the channel depth is hard. Whatever N we pick, that means up to that many callers can be waiting in line. Sometimes we don't want that at all, but tokio's mpsc channels don't allow N=0. (There are other channel implementations that do if we decide we need this.)
  • If we just use inner_tx.send(_) as in my pseudocode, even if the channel is full, that will just block until there's room, so we actually have an infinite line. This can be avoided via try_send instead, which allows us to bubble out some kind of "we're too busy for more requests" backpressure to our caller.
  • If do_some_thing() is slow, this can all compound and make everybody slow.
  • If do_some_thing() hangs, then everybody trying to send requests to the inner task hangs too. (This recently happened to us in sled-agent!)

jgallagher (Contributor):
A "build your own" variant of the above in the case where you want at most one instance of some operation is to use a sync::Mutex around a tokio task join handle. This would look something like (again untested, details omitted):

// one time up front, create a sync mutex around an optional tokio task join handle
let task_lock = sync::Mutex::new(None);

// ... somewhere else, where we want to do work ...

// acquire the lock
let mut task_lock = task_lock.lock().unwrap();

// if there's a previous task running, is it still running?
let still_running = match task_lock.as_ref() {
    Some(joinhandle) => !joinhandle.is_finished(),
    None => false,
};
if still_running {
    // return a "we're busy" error
}

// any previous task is done; start a new one
*task_lock = Some(tokio::spawn(do_some_work()));

This has its own problems; the biggest one is that we can't wait for the result of do_some_work() while holding the lock, so this really only works for background stuff that either doesn't need to return results at all, or the caller is in a position to poll us for completion at some point in the future. (In the joinhandle.is_finished() case, we can .await it to get the result of do_some_work().)

We don't use this pattern as much. One example is in installinator, where we do want to get the result of previously-completed tasks.

andrewjstone (Contributor):
Thanks for the write-up John. I think, overall, it's probably simpler to have a long-running task and issue requests that way. As you mentioned, this has its own problems, but we know what those problems are and we use this pattern all over sled agent.

In this case we can constrain the problem such that we only want to handle one in-flight request at a time, since reconfigurator execution will retry again later anyway. I'd suggest using a flume bounded channel with a size of 0 to act as a rendezvous channel. That should give the behavior we want. We could have separate tasks for performing initialization and config writing so we don't have one block out the other.

karencfv (Contributor Author):
excellent! Thanks a bunch for the write up!

karencfv (Contributor Author):
> We could have separate tasks for performing initialization and config writing so we don't have one block out the other.

@andrewjstone, do we really not want them to block each other out? It'd be problematic to have the db init job trying to run when the generate-config one hasn't finished, and vice versa, no?
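A minimal sketch of the rendezvous-channel approach suggested above, assuming a flume channel of capacity 0 (a send only succeeds while the worker is actively waiting in recv) and try_send for the fail-fast "we're busy" backpressure described earlier; the GenerateConfigRequest type is hypothetical, and this is not the code the PR landed:

use flume::TrySendError;

struct GenerateConfigRequest; // hypothetical stand-in for the real request type

#[tokio::main]
async fn main() {
    // Capacity 0 makes this a rendezvous channel: at most one request is
    // in flight, with no queue of waiting callers.
    let (tx, rx) = flume::bounded::<GenerateConfigRequest>(0);

    // Long-running worker task: handles one request at a time.
    tokio::spawn(async move {
        while let Ok(_request) = rx.recv_async().await {
            // ... generate and write the configuration file ...
        }
    });

    // Caller side (e.g. an HTTP handler): try_send fails immediately
    // instead of queueing, so we can surface a "busy" error and let
    // reconfigurator execution retry later.
    match tx.try_send(GenerateConfigRequest) {
        Ok(()) => { /* request accepted by the worker */ }
        Err(TrySendError::Full(_)) => { /* worker busy: return a "busy" error */ }
        Err(TrySendError::Disconnected(_)) => { /* worker exited: internal error */ }
    }
}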

}

pub fn log(&self) -> &Logger {
&self.log
}
}

fn read_generation_from_file(path: Utf8PathBuf) -> Result<Option<Generation>> {
// When the configuration file does not exist yet, this means it's a new server.
// It won't have a running clickhouse server or a generation number yet.
if !path.exists() {
return Ok(None);
}

let file = File::open(&path)?;
Contributor:
Can we add context to this error? Something like

Suggested change:
- let file = File::open(&path)?;
+ let file = File::open(&path).with_context(|| format!("failed to open {path}"))?;

let reader = BufReader::new(file);
// We know the generation number is at the top of the file, so we only
// need the first line.
let first_line = match reader.lines().next() {
Some(g) => g?,
// When the clickhouse configuration file exists but has no contents,
// it means something went wrong when creating the file earlier.
// We should return because something is definitely broken.
None => bail!(
"clickhouse configuration file exists at {}, but is empty",
path
),
};

let line_parts: Vec<&str> = first_line.rsplit(':').collect();
if line_parts.len() != 2 {
bail!("first line of configuration file is malformed: {}", first_line);
Contributor:
Can we add path to this error (and the other bails / anyhows below)?

}

// It's safe to unwrap since we already know `line_parts` contains two items.
let line_end_part: Vec<&str> =
line_parts.first().unwrap().split_terminator(" -->").collect();
if line_end_part.len() != 1 {
bail!("first line of configuration file is malformed: {}", first_line);
}

// It's safe to unwrap since we already know `line_end_part` contains an item.
let gen_u64: u64 = line_end_part.first().unwrap().parse().map_err(|e| {
anyhow!(
concat!(
"first line of configuration file is malformed: {}; ",
"error = {}",
),
first_line,
e
)
})?;

let gen = Generation::try_from(gen_u64)?;

Ok(Some(gen))
}
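As a worked example (our annotation; the line format is inferred from the test fixtures below), a well-formed first line such as <!-- generation:42 --> decomposes like this:

// Trace of the parsing above for the first line `<!-- generation:42 -->`:
//
//   "<!-- generation:42 -->".rsplit(':')   // ["42 -->", "<!-- generation"]
//   "42 -->".split_terminator(" -->")      // ["42"]
//   "42".parse::<u64>()                    // Ok(42)
//
// A line like `<!-- generation:2 --> -->` fails the second step because
// split_terminator yields two parts, which is what malformed_3 exercises.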

#[cfg(test)]
mod tests {
use super::read_generation_from_file;
use camino::Utf8PathBuf;
use clickhouse_admin_types::CLICKHOUSE_SERVER_CONFIG_FILE;
use omicron_common::api::external::Generation;
use std::str::FromStr;

#[test]
fn test_read_generation_from_file_success() {
let dir = Utf8PathBuf::from_str("types/testutils")
.unwrap()
.join(CLICKHOUSE_SERVER_CONFIG_FILE);
let generation = read_generation_from_file(dir).unwrap().unwrap();

assert_eq!(Generation::from(1), generation);
}

#[test]
fn test_read_generation_from_file_none() {
let dir = Utf8PathBuf::from_str("types/testutils")
.unwrap()
.join("i-dont-exist.xml");
let generation = read_generation_from_file(dir).unwrap();

assert_eq!(None, generation);
}

#[test]
fn test_read_generation_from_file_malformed_1() {
let dir = Utf8PathBuf::from_str("types/testutils")
.unwrap()
.join("malformed_1.xml");
let result = read_generation_from_file(dir);
let error = result.unwrap_err();
let root_cause = error.root_cause();

assert_eq!(
format!("{}", root_cause),
"first line of configuration file is malformed: <clickhouse>"
);
}

#[test]
fn test_read_generation_from_file_malformed_2() {
let dir = Utf8PathBuf::from_str("types/testutils")
.unwrap()
.join("malformed_2.xml");
let result = read_generation_from_file(dir);
let error = result.unwrap_err();
let root_cause = error.root_cause();

assert_eq!(
format!("{}", root_cause),
"first line of configuration file is malformed: <!-- generation:bob -->; error = invalid digit found in string"
);
}

#[test]
fn test_read_generation_from_file_malformed_3() {
let dir = Utf8PathBuf::from_str("types/testutils")
.unwrap()
.join("malformed_3.xml");
let result = read_generation_from_file(dir);
let error = result.unwrap_err();
let root_cause = error.root_cause();

assert_eq!(
format!("{}", root_cause),
"first line of configuration file is malformed: <!-- generation:2 --> -->"
);
}
}