Merge branch 'master' of github.com:philipdelff/NMdata
philipdelff committed Apr 11, 2024
2 parents 416b70b + b236321 commit 91c2b37
Showing 77 changed files with 3,111 additions and 1,789 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,7 +1,7 @@
Package: NMdata
Type: Package
Title: Preparation, Checking and Post-Processing Data for PK/PD Modeling
Version: 0.1.2.913
Version: 0.1.5.902
Authors@R: c(person("Philip", "Delff", email = "[email protected]", role = c("aut", "cre")))
Maintainer: Philip Delff <[email protected]>
Description: Efficient tools for preparation, checking and post-processing of data in PK/PD (pharmacokinetics/pharmacodynamics) modeling, with focus on use of Nonmem. Attention is paid to ensure consistency, traceability, and Nonmem compatibility of Data. Rigorously checks final Nonmem datasets. Implemented in 'data.table', but easily integrated with 'base' and 'tidyverse'.
6 changes: 6 additions & 0 deletions NAMESPACE
@@ -20,6 +20,7 @@ export(NMorderColumns)
export(NMreadCov)
export(NMreadCsv)
export(NMreadExt)
export(NMreadParsText)
export(NMreadPhi)
export(NMreadSection)
export(NMreadTab)
@@ -31,11 +32,14 @@ export(NMscanTables)
export(NMstamp)
export(NMwriteData)
export(NMwriteSection)
export(addOmegaCorr)
export(addTAPD)
export(cc)
export(cl)
export(colLabels)
export(compareCols)
export(dims)
export(dt2mat)
export(editCharCols)
export(egdt)
export(findCovs)
@@ -46,6 +50,7 @@ export(fnAppend)
export(fnExtension)
export(is.NMdata)
export(listMissings)
export(mat2dt)
export(mergeCheck)
export(renameByContents)
export(tmpcol)
@@ -56,6 +61,7 @@ importFrom(data.table,fwrite)
importFrom(data.table,setattr)
importFrom(fst,read_fst)
importFrom(fst,write_fst)
importFrom(stats,cov2cor)
importFrom(stats,reorder)
importFrom(stats,setNames)
importFrom(utils,capture.output)
102 changes: 95 additions & 7 deletions NEWS.md
@@ -1,8 +1,96 @@
# 0.1.2
* Improved support for reading multiple models with NMreadExt and
NMreadPhi.
# 0.1.6

## New features
* Functions `mat2dt()` and `dt2mat()` are included to convert matrix
data between matrix and data.frame representations (see the sketch
after this list).

* Function `addOmegaCorr()` adds the estimated correlations between
ETAs to parameter tables, such as those obtained with `NMreadExt()`.
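
A minimal sketch of how the two items above might be combined is shown
below. The file path, the `par.type` column, and the exact argument
handling of `dt2mat()`, `mat2dt()` and `addOmegaCorr()` are assumptions
for illustration, not documented behavior.

```r
library(NMdata)

## A hypothetical model run; the file path is made up.
pars <- NMreadExt("run001.ext")

## Assumed usage of addOmegaCorr(): add the estimated correlations
## between ETAs to the parameter table returned by NMreadExt().
pars <- addOmegaCorr(pars)

## Assumed usage of dt2mat()/mat2dt(): convert the OMEGA estimates
## between the long data.frame representation and an actual matrix.
## The par.type column name is an assumption about the NMreadExt()
## output.
omega.long <- subset(pars, par.type == "OMEGA")
omega.mat  <- dt2mat(omega.long)
omega.back <- mat2dt(omega.mat)
```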

## Bugfixes
* `NMcheckData()` now respects the `NMdataConf()` settings of
`col.time` and `col.id`. When using the `file` argument, `col.id`
was not respected at all; this is fixed.
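
The sketch below illustrates the fixed behavior under the assumption of
non-standard column names; `SUBJID`, `ATIME`, `pkdata` and the control
stream path are made up.

```r
library(NMdata)

## Register non-standard column names once via NMdataConf()
NMdataConf(col.id = "SUBJID", col.time = "ATIME")

## NMcheckData() now picks up both settings when checking a data set
## held in memory (pkdata is a placeholder data.frame) ...
res <- NMcheckData(pkdata)

## ... and col.id is now also respected in file mode, i.e. when
## checking the data as read via a control stream.
res.file <- NMcheckData(file = "run001.mod")
```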

# 0.1.5
## New features
* `countFlags` no longer needs a table of flags. By default it
summarizes the flags found in the data. If additional flags (with no
findings) are wanted in the summary table, the flag table is still
needed.

* If a flag table is provided, `countFlags` will throw an error if the
flags found in data are not covered by the provided flag table.

* `NMorderColumns` now includes arguments `col.id` and
`col.time`. These can now also be controlled using `NMdataConf()`.

* `NMreadParsText()` includes the arguments `modelname`, `col.model`,
and `as.fun` and defaults to what is defined in `NMdataConf()`, like
other `NMdata` functions. It also provides a `parameter` column for
easier merging with data from e.g. `ext` files read with
`NMreadExt()`.

* `NMreadParsText()` accepts a function (taking the control stream
path as its argument) that defines how to read the parameter
information. This is useful if the tabulated information is defined
in a comment in the control stream. `NMreadParsText()` essentially
allows for full automation of flexible parameter-table generation.

* `NMdataConf()` is configured to handle `NMsim`'s `dir.sims` and
`dir.res` settings (see the sketch after this list).

* `NMdataConf(reset=TRUE)` wipes all settings. In recent versions,
`NMdataConf` accepts the `allow.unknown` argument, which means
settings that are unknown to `NMdata` can be stored. This is
relevant for other packages that want to make use of `NMdata`'s
configuration system (`NMsim` is one example). `NMdataConf(reset=TRUE)`
now makes sure to wipe all such configuration, if any exists.
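
A short sketch of the `NMdataConf()` behavior described in the last
two items; the directory names and the `my.setting` option are made-up
examples.

```r
library(NMdata)

## NMsim-related settings handled by NMdataConf() (paths are made up)
NMdataConf(dir.sims = "simulations", dir.res = "simres")

## With allow.unknown=TRUE, settings unknown to NMdata can be stored
## for other packages to pick up. "my.setting" is a made-up name.
NMdataConf(my.setting = "some value", allow.unknown = TRUE)

## reset=TRUE now wipes everything, including such unknown settings.
NMdataConf(reset = TRUE)
```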

# 0.1.4

## New functions
* `NMreadParsText()` is a new function to extract comments from the
`$THETA`, `$OMEGA` and `$SIGMA` sections. As long as the comments
are structured in a table-like manner, `NMreadParsText()` should be
able to fetch them almost no matter what delimiters you used. Use,
say, `fields="%init;num)symbol/transform/label(unit)"` if you have
lines like
`(0,1) ; 1) CL / log / This is clearance (L/h)`.
Not all comment lines have to be complete, and you can specify
separate formats for `$THETA`, `$OMEGA` and `$SIGMA`. Together with
`NMreadExt()`, this is a very flexible basis for generating
parameter tables (see the sketch after this list).

* `colLabels()` is a simple wrapper around `compareCols()` that
extracts the SAS column labels from data sets.
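
A minimal sketch of the `NMreadParsText()` workflow described above.
The control stream path is made up, the `fields` string is the one
quoted in the bullet, and the merge step is only indicated since it
depends on the columns returned.

```r
library(NMdata)

## Suppose run001.mod (a made-up path) has $THETA lines like
##   (0,1) ; 1) CL / log / This is clearance (L/h)
pars <- NMreadParsText("run001.mod",
                       fields = "%init;num)symbol/transform/label(unit)")

## Combined with the estimates read by NMreadExt(), this provides the
## raw material for a parameter table. The merge key depends on the
## columns returned, so it is only indicated here.
est <- NMreadExt("run001.ext")
## partab <- merge(pars, est, by = <appropriate key>)
```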

## New features
* NMdata functions now by default look for input control streams with
either the `.mod` or the `.ctl` file name extension. The user
previously had to tell NMdata to look for `.ctl` using configuration
options or function arguments, but it now works either way. An error
is thrown if both are found.

* `NMreadExt` will by default only return parameters and iterations
from the last table available. This can be controlled by the
`tableno` argument (see the sketch after this list).

* `fnAppend` now throws an error in case the file name extension
cannot be identified.
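
A sketch of the `NMreadExt()` and `fnAppend()`/`fnExtension()` behavior
mentioned above; the paths are made up, and passing a table number to
`tableno` is an assumption about how it is used.

```r
library(NMdata)

## By default only the last table in the .ext file is returned;
## a specific table is assumed to be selectable via tableno.
ext.last <- NMreadExt("run001.ext")
ext.tab1 <- NMreadExt("run001.ext", tableno = 1)

## fnExtension() swaps the file name extension; fnAppend() appends a
## string to the file name and errors if no extension can be found.
## The exact separator inserted by fnAppend() is not assumed here.
fnExtension("run001.mod", ".lst")
fnAppend("output/run001.csv", "v2")
```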

## Bugfixes
* `NMreadText` would fail to disregard some comment lines when
`keep.comments=FALSE`. Fixed.

# 0.1.3
* Better support for models with multiple estimation
steps. In particular, reading output tables now better distinguishes
between Nonmem table numbers and repetitions (like
SUBPROBLEMS). Also, functions that read parameter estimates clearly
separate Nonmem table numbers.

* Improved support for reading multiple models with NMreadExt and
NMreadPhi.
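
The improved multi-model support might be used as sketched below; the
paths are made up and passing a vector of files at once is an
assumption based on the note above.

```r
library(NMdata)

## Read parameter estimates and ETAs from several models at once.
ext <- NMreadExt(c("run001.ext", "run002.ext"))
phi <- NMreadPhi(c("run001.phi", "run002.phi"))
```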

# 0.1.2
## New features
@@ -212,7 +300,7 @@ chaned to ensure consistent test results once data.table 1.14.7 is

## New data
* A new data set called mad is included. It is based on the
mad_missing_duplicates from the xgxr package. Doses are implemented
mad_missing_duplicates from the `xgxr` package. Doses are implemented
using ADDL and II (so only one dosing row per subject). It is
included for testing the new NMexpandDoses and addTAPD functions.

@@ -247,12 +335,12 @@ chaned to ensure consistent test results once data.table 1.14.7 is
set using the tz.lst argument or using NMdataConf - at least for
now.

* Checks of unique subject identifier (usubjid) included in
* Checks of unique subject identifier (`usubjid`) included in
NMcheckData. This is mostly to detect the potential issue that the
subject IDs generated for analysis are not unique across actual
subjects. If a usubjid (e.g. from clinical data sets) is included in
subjects. If a `usubjid` (e.g. from clinical data sets) is included in
data, NMcheckData can check this for basic properties and check the
analysis subject ID and the usubjid against each other.
analysis subject ID and the `usubjid` against each other.

* New function: cl - creates factors, ordered by the appearance of the
elements when created. cl("b","a") results in a factor with levels
21 changes: 14 additions & 7 deletions R/NMcheckData.R
@@ -168,10 +168,10 @@
##' subject and occasion.
##'
##' \item Columns specified in cols.num must be present, numeric
##' and non-NA.
##' and non-`NA`.
##'
##' \item If a unique subject identifier column (col.usubjid) is
##' provided, col.id must be unique within values of col.usubjid and
##' \item If a unique subject identifier column (`col.usubjid`) is
##' provided, `col.id` must be unique within values of `col.usubjid` and
##' vice versa.
##'
##' \item Events should not be duplicated. For all rows, the
@@ -213,10 +213,10 @@ NMcheckData <- function(data,file,covs,covs.occ,cols.num,col.id="ID",
ADDL <- NULL
AMT <- NULL
CMT <- NULL
DV <- NULL
## DV <- NULL
EVID <- NULL
ID.jump <- NULL
ID <- NULL
## ID <- NULL
II <- NULL
MDVDV <- NULL
MDV <- NULL
@@ -266,9 +266,16 @@ NMcheckData <- function(data,file,covs,covs.occ,cols.num,col.id="ID",
if(missing(as.fun)) as.fun <- NULL
as.fun <- NMdataDecideOption("as.fun",as.fun)

if(missing(col.id)) col.id <- NULL
col.id <- NMdataDecideOption("col.id",col.id)

if(missing(col.flagn)) col.flagn <- NULL
col.flagn.orig <- col.flagn
col.flagn <- NMdataDecideOption("col.flagn",col.flagn)

if(missing(col.time)) col.time <- NULL
col.time <- NMdataDecideOption("col.time",col.time)

if(missing(file)) file <- NULL

if(!is.null(covs) && !is.character(covs)) {
@@ -304,7 +311,7 @@ NMcheckData <- function(data,file,covs,covs.occ,cols.num,col.id="ID",
}
}


if(!(is.character(col.dv)&&length(col.dv)==1)){
stop("col.dv must be a character and vector of length 1.")
}
@@ -330,7 +337,7 @@ NMcheckData <- function(data,file,covs,covs.occ,cols.num,col.id="ID",
### file mode
if(!is.null(file)){
if(!is.null(col.flagn.orig)){warning("col.flagn is not used when file is specified.")}
col.id <- "ID"
## col.id <- "ID"
## use.rds <- FALSE
formats.read="csv"
file.mod <- NULL