
Merge pull request #26518 from verytrap/main
chore: remove repetitive words
Sean Loiselle authored Apr 9, 2024
2 parents 36fdafc + 55f88ea commit 01f513f
Showing 21 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion doc/developer/command-and-response-binary-encoding.md
@@ -16,7 +16,7 @@ This process of adding Protobuf-based serialization support for a new Rust type
1. Define a Protobuf message type `Proto$T` (a.k.a. *the Protobuf representation of `$T`*) and compile it to Rust with [`prost`](https://github.com/tokio-rs/prost).
1. Implement a pair of mappings that convert between `$T` and `Proto$T`.

-If `$T` needs to be added to `mz_expr::foo::bar`, the the source code of the `mz_expr` crate needs to be adapted as follows.
+If `$T` needs to be added to `mz_expr::foo::bar`, the source code of the `mz_expr` crate needs to be adapted as follows.

- `expr` - crate root folder.
- `build.rs` - contains `prost_build` instructions for compiling all `*.proto` files in the crate into `*.rs` source code.
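The mapping pair in the last step above might look roughly like the following minimal sketch. `Dummy` (standing in for `$T`), `ProtoDummy`, and the `From`/`TryFrom` pairing are hypothetical illustrations; the real crate may route these conversions through dedicated traits.

```rust
/// Hypothetical stand-in for the Rust type `$T`.
pub struct Dummy {
    pub name: String,
    pub arity: usize,
}

/// Hypothetical prost-style representation `Proto$T`
/// (normally emitted by `build.rs` from a `.proto` file).
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ProtoDummy {
    #[prost(string, tag = "1")]
    pub name: String,
    #[prost(uint64, tag = "2")]
    pub arity: u64,
}

/// `$T` to `Proto$T` is infallible.
impl From<&Dummy> for ProtoDummy {
    fn from(value: &Dummy) -> Self {
        ProtoDummy {
            name: value.name.clone(),
            arity: value.arity as u64,
        }
    }
}

/// `Proto$T` to `$T` can fail, e.g. on values that overflow `usize`.
impl TryFrom<ProtoDummy> for Dummy {
    type Error = String;

    fn try_from(proto: ProtoDummy) -> Result<Self, Self::Error> {
        Ok(Dummy {
            name: proto.name,
            arity: usize::try_from(proto.arity).map_err(|e| e.to_string())?,
        })
    }
}
```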
2 changes: 1 addition & 1 deletion doc/developer/design/20210406_mapfilterproject.md
@@ -90,7 +90,7 @@ By default, the `csv.rs` source will form a `Datum::String` for each of the pres
The source implementation can
1. use `mfp.demand()` to determine which columns are required by the `mfp` instance,
2. use `mfp.permute()` to change the column references to point to a new, dense set of column identifiers, and then
-3. only assemble values for those referenced columns and apply `mfp` to the the result.
+3. only assemble values for those referenced columns and apply `mfp` to the result.

This idiom is especially valuable in situations where input columns may be more complicated, for example Avro metadata, which can involve expensive decoding to prepare, but which can be entirely skipped if none of the contents are referenced.

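The demand/permute idiom above can be sketched as follows. `Mfp`, `demanded_columns`, and this `permute` signature are simplified, hypothetical stand-ins for illustration, not the actual `MapFilterProject` API.

```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in for a compiled map/filter/project plan.
struct Mfp {
    /// Input columns the plan actually reads.
    demanded: Vec<usize>,
}

impl Mfp {
    /// Analogue of `mfp.demand()`: which input columns are referenced.
    fn demanded_columns(&self) -> &[usize] {
        &self.demanded
    }

    /// Analogue of `mfp.permute(..)`: rewrite column references to point
    /// into a new, dense set of column identifiers.
    fn permute(&mut self, remap: &BTreeMap<usize, usize>) {
        for col in self.demanded.iter_mut() {
            *col = remap[&*col];
        }
    }
}

fn main() {
    // Suppose the CSV record has 5 columns but the query reads only #1 and #4.
    let mut mfp = Mfp { demanded: vec![1, 4] };

    // Build a dense remapping: old column index -> new position.
    let remap: BTreeMap<usize, usize> = mfp
        .demanded_columns()
        .iter()
        .enumerate()
        .map(|(new, &old)| (old, new))
        .collect();
    mfp.permute(&remap);

    // The source now assembles only the two demanded columns and applies
    // the plan to that narrow row, skipping decode work for columns 0, 2, 3.
    assert_eq!(mfp.demanded, vec![0, 1]);
}
```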
@@ -55,7 +55,7 @@ The selected strategy offers flexibility due to the separation of the serializab
3. define `$T ⇔ Proto$T` for each `$T`.
2. Requires the most amount of maintenance work, as the same boilerplate needs to be added for each new type `$T` used by the API.
3. Accumulates technical debt, as the `$T ⇔ Proto$T` mapping for complex types is going to be recursive and therefore susceptible to stack overflow issues (see #9000).
-4. We have to pay the runtime and memory penalty if mediating between `$T` and `Proto$T`, possibly in the the hot paths of some processes.
+4. We have to pay the runtime and memory penalty if mediating between `$T` and `Proto$T`, possibly in the hot paths of some processes.

## Alternatives

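A minimal sketch of the stack-overflow concern in point 3 above, using a hypothetical recursive type: a hand-written conversion costs one stack frame per nesting level of `$T`.

```rust
/// Hypothetical recursive stand-in for a complex `$T`.
enum Expr {
    Leaf(i64),
    Neg(Box<Expr>),
}

/// Hypothetical `Proto$T` mirror.
enum ProtoExpr {
    Leaf(i64),
    Neg(Box<ProtoExpr>),
}

/// The natural conversion recurses once per `Neg` level...
fn into_proto(e: &Expr) -> ProtoExpr {
    match e {
        Expr::Leaf(v) => ProtoExpr::Leaf(*v),
        Expr::Neg(inner) => ProtoExpr::Neg(Box::new(into_proto(inner))),
    }
}

fn main() {
    let mut e = Expr::Leaf(1);
    for _ in 0..500_000 {
        e = Expr::Neg(Box::new(e));
    }
    // ...so a value this deeply nested can abort with a stack overflow
    // instead of returning an error.
    let _proto = into_proto(&e);
}
```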
2 changes: 1 addition & 1 deletion doc/developer/design/20230903_avro_doc.md
@@ -232,7 +232,7 @@ consider the following sources of documentation in order:
* A `DOC ON TYPE` comment naming `v`.
* A comment on materialized view `v`.

-Changes made to comments *after* the the sink is created will not be reflected
+Changes made to comments *after* the sink is created will not be reflected
in the Avro schema.

### User-facing documentation
@@ -149,7 +149,7 @@ environment will be:
enable the tested feature in the `CREATE CLUSTER` definition.
3. Create an `UNBILLED` replica for that cluster.
4. Ask the customer to replicate (a subset of) dataflow-backed catalog items
-   defined on the the original cluster to the experiment cluster.
+   defined on the original cluster to the experiment cluster.
5. Monitor and record observed differences between the dataflows running in the
original cluster and the (modified) dataflows running in the experiment
cluster.
@@ -169,7 +169,7 @@ Remember to document any dependencies that may need to break or change as a
result of this work.
-->

-In order to facilitate this workflow, we propose the the following changes
+In order to facilitate this workflow, we propose the following changes
(discussed in detail below):

- Extensions to the `CREATE CLUSTER` syntax.
2 changes: 1 addition & 1 deletion doc/developer/platform/ux.md
@@ -295,7 +295,7 @@ The HTTPS server exposes two APIs:
SQL string to execute.

- Requests may contain multiple SQL statements separated by semicolons.
-- Statements are processed as if they were supplied via the the [Simple
+- Statements are processed as if they were supplied via the [Simple
Query][simple-query] flow of the PostgreSQL protocol.
- Statements do not support parameters

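A hedged sketch of the behavior described above: two semicolon-separated statements in one request, each processed like a Simple Query, so no bind parameters are available. The `/api/sql` path and the `{"query": ...}` payload shape are assumptions made for illustration, not confirmed by this page.

```rust
// Sketch only: endpoint path and payload shape are assumptions.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Two statements, separated by a semicolon, in a single request body.
    // Because the Simple Query flow is used, parameters ($1, $2, ...) are
    // not supported; values must be inlined into the SQL string.
    let body = json!({
        "query": "CREATE TABLE IF NOT EXISTS t (a int); SELECT count(*) FROM t;"
    });

    let resp = client
        .post("https://<host>/api/sql") // hypothetical endpoint
        .json(&body)
        .send()?;
    println!("status: {}", resp.status());
    println!("body: {}", resp.text()?);
    Ok(())
}
```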
2 changes: 1 addition & 1 deletion doc/user/content/ingest-data/postgres/amazon-rds.md
@@ -191,7 +191,7 @@ see the [Terraform module repository](https://github.com/MaterializeInc/terrafor
and **5432** and select the target group you created in the previous
step.
-1. In the security group of your RDS instance, [allow traffic from the the network load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html).
+1. In the security group of your RDS instance, [allow traffic from the network load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html).
If [client IP preservation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-preservation)
is disabled, the easiest approach is to add an inbound rule with the VPC
2 changes: 1 addition & 1 deletion src/adapter/src/config/frontend.rs
@@ -31,7 +31,7 @@ pub struct SystemParameterFrontend {
/// This scopes down queries to a specific key.
ld_ctx: ld::Context,
/// A map from parameter names to LaunchDarkly feature keys
-    /// to use when populating the the [SynchronizedParameters]
+    /// to use when populating the [SynchronizedParameters]
/// instance in [SystemParameterFrontend::pull].
ld_key_map: BTreeMap<String, String>,
/// Frontend metrics.
2 changes: 1 addition & 1 deletion src/adapter/src/config/mod.rs
@@ -40,7 +40,7 @@ pub struct SystemParameterSyncConfig {
/// The SDK key.
ld_sdk_key: String,
/// A map from parameter names to LaunchDarkly feature keys
-    /// to use when populating the the [SynchronizedParameters]
+    /// to use when populating the [SynchronizedParameters]
/// instance in [SystemParameterFrontend::pull].
ld_key_map: BTreeMap<String, String>,
}
2 changes: 1 addition & 1 deletion src/adapter/src/error.rs
@@ -143,7 +143,7 @@ pub enum AdapterError {
IdleInTransactionSessionTimeout,
/// The transaction is in single-subscribe mode.
SubscribeOnlyTransaction,
-    /// An error occurred in the the optimizer.
+    /// An error occurred in the optimizer.
Optimizer(OptimizerError),
/// A query depends on items which are not allowed to be referenced from the current cluster.
UnallowedOnCluster {
2 changes: 1 addition & 1 deletion src/adapter/src/optimize/copy_to.rs
@@ -286,7 +286,7 @@ impl<'s> Optimize<LocalMirPlan<Resolved<'s>>> for Optimizer {
// Set the `as_of` and `until` timestamps for the dataflow.
df_desc.set_as_of(timestamp_ctx.antichain());

-        // Use the the opportunity to name an `until` frontier that will prevent
+        // Use the opportunity to name an `until` frontier that will prevent
// work we needn't perform. By default, `until` will be
// `Antichain::new()`, which prevents no updates and is safe.
//
2 changes: 1 addition & 1 deletion src/adapter/src/optimize/peek.rs
@@ -275,7 +275,7 @@ impl<'s> Optimize<LocalMirPlan<Resolved<'s>>> for Optimizer {
// Set the `as_of` and `until` timestamps for the dataflow.
df_desc.set_as_of(timestamp_ctx.antichain());

-        // Use the the opportunity to name an `until` frontier that will prevent
+        // Use the opportunity to name an `until` frontier that will prevent
// work we needn't perform. By default, `until` will be
// `Antichain::new()`, which prevents no updates and is safe.
//
4 changes: 2 additions & 2 deletions src/compute-types/src/plan/interpret/api.rs
@@ -338,7 +338,7 @@ where
if res_value_new != self.ctx.bindings.get(id).unwrap().value {
// Set the change flag.
change = true;
-                // Update the the context entry.
+                // Update the context entry.
let new_entry = ContextEntry::of_let_rec(res_value_new);
self.ctx.bindings.insert(*id, new_entry);
}
@@ -626,7 +626,7 @@ where
if res_value_new != self.ctx.bindings.get(id).unwrap().value {
// Set the change flag.
change = true;
-                // Update the the context entry.
+                // Update the context entry.
let new_entry = ContextEntry::of_let_rec(res_value_new);
self.ctx.bindings.insert(*id, new_entry);
}
2 changes: 1 addition & 1 deletion src/expr/src/scalar/func.rs
@@ -1271,7 +1271,7 @@ fn log_base_numeric<'a>(a: Datum<'a>, b: Datum<'a>) -> Result<Datum<'a>, EvalErr
.expect("reducing precision below max always succeeds");
let mut integral_check = b.clone();

-    // `reduce` rounds to the the context's final digit when the number of
+    // `reduce` rounds to the context's final digit when the number of
// digits in its argument exceeds its precision. We've contrived that to
// happen by shrinking the context's precision by 1.
cx.reduce(&mut integral_check);
2 changes: 1 addition & 1 deletion src/lsp-server/tests/test.rs
@@ -124,7 +124,7 @@ mod tests {
test_query(query, None, req_client, resp_client).await;
}

-    /// Asserts the the server can return completions varying the context
+    /// Asserts the server can return completions varying the context
async fn test_completion(req_client: &mut DuplexStream, resp_client: &mut DuplexStream) {
let request = format!(
r#"{{
2 changes: 1 addition & 1 deletion src/ore/src/netio/socket.rs
@@ -122,7 +122,7 @@ pub struct UnixSocketAddr {
impl UnixSocketAddr {
/// Constructs a Unix domain socket address from the provided path.
///
-    /// Unlike the [`UnixSocketAddr::from_pathname`] method in the the standard
+    /// Unlike the [`UnixSocketAddr::from_pathname`] method in the standard
/// library, `path` is required to be valid UTF-8.
///
/// # Errors
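For contrast with the UTF-8 requirement noted above, here is a small sketch using the standard library's `from_pathname`, which accepts any `Path`. The explicit UTF-8 check is only an illustration of the stricter contract, not this crate's implementation.

```rust
use std::os::unix::net::SocketAddr;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // std's constructor accepts any path, UTF-8 or not.
    let addr = SocketAddr::from_pathname(Path::new("/tmp/example.sock"))?;
    println!("{addr:?}");

    // A UTF-8-only constructor (as described above) might validate first.
    // Illustration only:
    let path = Path::new("/tmp/example.sock");
    let _utf8: &str = path.to_str().ok_or_else(|| {
        std::io::Error::new(std::io::ErrorKind::InvalidInput, "path is not valid UTF-8")
    })?;
    Ok(())
}
```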
2 changes: 1 addition & 1 deletion src/persist-client/src/internal/metrics.rs
@@ -2776,7 +2776,7 @@ pub fn encode_ts_metric<T: Codec64>(ts: &Antichain<T>) -> i64 {
// taking advantage of the fact that in practice, timestamps in mz are
// currently always a u64 (and if we switch them, it will be to an i64).
// This means that for all values that mz would actually produce,
-    // interpreting the the encoded bytes as a little-endian i64 will work.
+    // interpreting the encoded bytes as a little-endian i64 will work.
// Both of them impl PartialOrder, so in practice, there will always be
// zero or one elements in the antichain.
match ts.elements().first() {
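A standalone sketch of the reinterpretation the comment above describes, using only `std`: the little-endian bytes of a `u64` timestamp read back as an `i64`. This illustrates the idea, not the actual `Codec64` encoding.

```rust
fn main() {
    // For the timestamps mz produces in practice (u64s well below
    // i64::MAX), the little-endian bytes reinterpret cleanly as a
    // non-negative little-endian i64.
    let ts: u64 = 1_712_620_800_000; // an example millisecond timestamp
    let metric = i64::from_le_bytes(ts.to_le_bytes());
    assert_eq!(metric, 1_712_620_800_000_i64);

    // A u64 above i64::MAX would come back negative, which is why the
    // comment hedges on "values that mz would actually produce".
    assert!(i64::from_le_bytes(u64::MAX.to_le_bytes()) < 0);
}
```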
2 changes: 1 addition & 1 deletion src/repr/src/explain/tracing.rs
@@ -31,7 +31,7 @@ pub struct PlanTrace<T> {
/// A path of segments identifying the spans in the current ancestor-or-self
/// chain. The current path is used when accumulating new `entries`.
path: Mutex<String>,
-    /// The the first time when entering a span (None no span was entered yet).
+    /// The first time when entering a span (None no span was entered yet).
start: Mutex<Option<std::time::Instant>>,
/// A path of times at which the spans in the current ancestor-or-self chain
/// were started. The duration since the last time is used when accumulating
2 changes: 1 addition & 1 deletion src/repr/src/row.rs
@@ -2079,7 +2079,7 @@ impl RowPacker<'_> {
/// packer will produce an invalid row, the unpacking of which may
/// trigger undefined behavior!
///
-    /// To find the byte offset of a datum boundary, inspect the the packer's
+    /// To find the byte offset of a datum boundary, inspect the packer's
/// byte length by calling `packer.data().len()` after pushing the desired
/// number of datums onto the packer.
pub unsafe fn truncate(&mut self, pos: usize) {
2 changes: 1 addition & 1 deletion src/storage-types/src/sources/mysql.rs
@@ -338,7 +338,7 @@ impl Refines<()> for GtidState {
fn summarize(_path: Self::Summary) -> <() as Timestamp>::Summary {}
}

-/// This type is used to represent the the progress of each MySQL GTID 'source_id' in the
+/// This type is used to represent the progress of each MySQL GTID 'source_id' in the
/// ingestion dataflow.
///
/// A MySQL GTID consists of a source_id (UUID) and transaction_id (non-zero u64).
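As a hedged illustration of the GTID shape described above (a `source_id` UUID plus a non-zero `transaction_id`), here is a minimal standalone sketch; the `Gtid` struct is hypothetical and is not the `GtidState` type from this file.

```rust
use std::num::NonZeroU64;

/// Hypothetical stand-in for a single MySQL GTID: a source_id (UUID)
/// paired with a non-zero transaction_id.
struct Gtid {
    /// UUID of the originating server, kept as a string here to avoid
    /// pulling in a uuid crate for the sketch.
    source_id: String,
    /// Transaction ids start at 1, hence `NonZeroU64`.
    transaction_id: NonZeroU64,
}

fn main() {
    let gtid = Gtid {
        source_id: "3E11FA47-71CA-11E1-9E33-C80AA9429562".to_string(),
        transaction_id: NonZeroU64::new(23).expect("non-zero"),
    };
    // MySQL renders a GTID as `source_id:transaction_id`.
    println!("{}:{}", gtid.source_id, gtid.transaction_id);
}
```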
2 changes: 1 addition & 1 deletion src/transform/src/union_cancel.rs
@@ -48,7 +48,7 @@ impl crate::Transform for UnionBranchCancellation {
/// Result of the comparison of two branches of a union for cancellation
/// purposes.
enum BranchCmp {
-    /// The two branches are equivalent in the sense the the produce the
+    /// The two branches are equivalent in the sense the produce the
/// same exact results.
Equivalent,
/// The two branches are equivalent, but one of them produces negated
