
MachineLearning

Service object for interacting with AWS MachineLearning service.

public struct MachineLearning: AWSService 

Definition of the public APIs exposed by Amazon Machine Learning

Inheritance

AWSService

Initializers

init(client:region:partition:endpoint:timeout:byteBufferAllocator:options:)

Initialize the MachineLearning client

public init(
        client: AWSClient,
        region: SotoCore.Region? = nil,
        partition: AWSPartition = .aws,
        endpoint: String? = nil,
        timeout: TimeAmount? = nil,
        byteBufferAllocator: ByteBufferAllocator = ByteBufferAllocator(),
        options: AWSServiceConfig.Options = []
    ) 

Parameters

  • client: AWSClient used to process requests
  • region: Region of server you want to communicate with. This will override the partition parameter.
  • partition: AWS partition where service resides, standard (.aws), china (.awscn), government (.awsusgov).
  • endpoint: Custom endpoint URL to use instead of standard AWS servers
  • timeout: Timeout value for HTTP requests
  • byteBufferAllocator: Allocator for ByteBuffers
  • options: Service options
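
For example, a service object can be constructed from a shared AWSClient. A minimal sketch (the region and credential setup here are illustrative assumptions):

    import SotoMachineLearning

    // Create a shared AWSClient; credentials are resolved from the default provider chain.
    let client = AWSClient(httpClientProvider: .createNew)
    defer { try? client.syncShutdown() }

    // Create the MachineLearning service object, optionally pinning it to a region.
    let machineLearning = MachineLearning(client: client, region: .useast1)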

init(from:patch:)

Initializer required by AWSService.with(middlewares:timeout:byteBufferAllocator:options). You cannot use this initializer directly because there are no public initializers for AWSServiceConfig.Patch. Use AWSService.with(middlewares:timeout:byteBufferAllocator:options) instead.

public init(from: MachineLearning, patch: AWSServiceConfig.Patch) 

Properties

client

Client used for communication with AWS

public let client: AWSClient

config

Service configuration

public let config: AWSServiceConfig

Methods

addTags(_:logger:on:)

public func addTags(_ input: AddTagsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<AddTagsOutput> 

Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you add a tag using a key that is already associated with the ML object, AddTags updates the tag's value.
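
A minimal sketch of a typical call, reusing the machineLearning service object from the initializer example; the AddTagsInput member names mirror the AWS API's ResourceId, ResourceType, and Tags fields and, like the .mlModel case name, are assumptions about the generated Soto types:

    // Tag an MLModel; each object can hold at most 10 tags.
    let tagsInput = AddTagsInput(
        resourceId: "ml-example-model",                 // ID of the ML object to tag
        resourceType: .mlModel,                         // assumed case for the "MLModel" resource type
        tags: [Tag(key: "project", value: "churn")]
    )
    let tagsFuture = machineLearning.addTags(tagsInput)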

createBatchPrediction(_:logger:on:)

public func createBatchPrediction(_ input: CreateBatchPredictionInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateBatchPredictionOutput> 

Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
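
A minimal sketch of this submit-then-poll pattern, where batchPredictionInput stands for a previously built CreateBatchPredictionInput and the output member names (status, outputUri) follow the AWS API and are assumptions about the generated types:

    // Submit the job; Amazon ML returns immediately and the BatchPrediction is PENDING.
    _ = try machineLearning.createBatchPrediction(batchPredictionInput).wait()

    // Poll with GetBatchPrediction; once status is COMPLETED the results are at outputUri.
    let batchPrediction = try machineLearning.getBatchPrediction(
        GetBatchPredictionInput(batchPredictionId: "bp-example-id")
    ).wait()
    print(batchPrediction.status as Any, batchPrediction.outputUri as Any)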

createDataSourceFromRDS(_:logger:on:)

public func createDataSourceFromRDS(_ input: CreateDataSourceFromRDSInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateDataSourceFromRDSOutput> 

Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

createDataSourceFromRedshift(_:logger:on:)

public func createDataSourceFromRedshift(_ input: CreateDataSourceFromRedshiftInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateDataSourceFromRedshiftOutput> 

Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation.

After the DataSource has been created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also requires a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable, or will it be split apart into word combinations? The recipe provides answers to these questions.

You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing datasource and copy the values to a CreateDataSource call. Change the settings that you want to change and make sure that all required fields have the appropriate values.

createDataSourceFromS3(_:logger:on:)

public func createDataSourceFromS3(_ input: CreateDataSourceFromS3Input, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateDataSourceFromS3Output> 

Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource has been created and is ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used to perform only CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource.

After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable, or will it be split apart into word combinations? The recipe provides answers to these questions.
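
A minimal sketch of creating an S3-backed DataSource with computed statistics; the S3DataSpec and CreateDataSourceFromS3Input member names mirror the AWS API and are assumptions about the generated Soto types, and the bucket paths and IDs are placeholders:

    // Point the DataSource at .csv observation data plus a schema stored in S3.
    let dataSpec = S3DataSpec(
        dataLocationS3: "s3://example-bucket/observations/",
        dataSchemaLocationS3: "s3://example-bucket/observations.schema"
    )
    let s3Input = CreateDataSourceFromS3Input(
        computeStatistics: true,                // required if the DataSource will train an MLModel
        dataSourceId: "ds-example-id",
        dataSourceName: "example datasource",
        dataSpec: dataSpec
    )
    let s3Future = machineLearning.createDataSourceFromS3(s3Input)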

createEvaluation(_:logger:on:)

public func createEvaluation(_ input: CreateEvaluationInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateEvaluationOutput> 

Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effectively the MLModel performs on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.
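
A minimal sketch, with placeholder IDs and member names that mirror the AWS API (assumptions about the generated types):

    // Evaluate a trained model against a held-out DataSource.
    let evaluationInput = CreateEvaluationInput(
        evaluationDataSourceId: "ds-test-id",
        evaluationId: "ev-example-id",
        evaluationName: "example evaluation",
        mlModelId: "ml-example-model"
    )
    let evaluationFuture = machineLearning.createEvaluation(evaluationInput)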

createMLModel(_:logger:on:)

public func createMLModel(_ input: CreateMLModelInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateMLModelOutput> 

Creates a new MLModel using the DataSource and the recipe as information sources.

An MLModel is nearly immutable. Users can update only the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel.

CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING. After the MLModel has been created and is ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetMLModel operation to check the progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
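
A minimal sketch of training a regression model; the member names and the .regression case mirror the AWS API and are assumptions about the generated types:

    // Train a model from a DataSource that was created with ComputeStatistics set to true.
    let modelInput = CreateMLModelInput(
        mlModelId: "ml-example-model",
        mlModelName: "example model",
        mlModelType: .regression,               // BINARY, REGRESSION, or MULTICLASS
        trainingDataSourceId: "ds-training-id"
    )
    let modelFuture = machineLearning.createMLModel(modelInput)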

createRealtimeEndpoint(_:logger:on:)

public func createRealtimeEndpoint(_ input: CreateRealtimeEndpointInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateRealtimeEndpointOutput> 

Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the MLModel; that is, the location to send real-time prediction requests for the specified MLModel.

deleteBatchPrediction(_:logger:on:)

public func deleteBatchPrediction(_ input: DeleteBatchPredictionInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteBatchPredictionOutput> 

Assigns the DELETED status to a BatchPrediction, rendering it unusable.

After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

Caution: The result of the DeleteBatchPrediction operation is irreversible.

deleteDataSource(_:logger:on:)

public func deleteDataSource(_ input: DeleteDataSourceInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteDataSourceOutput> 

Assigns the DELETED status to a DataSource, rendering it unusable.

After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

Caution: The results of the DeleteDataSource operation are irreversible.

deleteEvaluation(_:logger:on:)

public func deleteEvaluation(_ input: DeleteEvaluationInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteEvaluationOutput> 

Assigns the DELETED status to an Evaluation, rendering it unusable.

After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

Caution: The results of the DeleteEvaluation operation are irreversible.

deleteMLModel(_:logger:on:)

public func deleteMLModel(_ input: DeleteMLModelInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteMLModelOutput> 

Assigns the DELETED status to an MLModel, rendering it unusable.

After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

Caution: The result of the DeleteMLModel operation is irreversible.

deleteRealtimeEndpoint(_:logger:on:)

public func deleteRealtimeEndpoint(_ input: DeleteRealtimeEndpointInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteRealtimeEndpointOutput> 

Deletes a real-time endpoint of an MLModel.

deleteTags(_:logger:on:)

public func deleteTags(_ input: DeleteTagsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteTagsOutput> 

Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover deleted tags.

If you specify a tag that doesn't exist, Amazon ML ignores it.

describeBatchPredictions(_:logger:on:)

public func describeBatchPredictions(_ input: DescribeBatchPredictionsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeBatchPredictionsOutput> 

Returns a list of BatchPrediction operations that match the search criteria in the request.

describeDataSources(_:logger:on:)

public func describeDataSources(_ input: DescribeDataSourcesInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeDataSourcesOutput> 

Returns a list of DataSource that match the search criteria in the request.

describeEvaluations(_:logger:on:)

public func describeEvaluations(_ input: DescribeEvaluationsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeEvaluationsOutput> 

Returns a list of Evaluation that match the search criteria in the request.

describeMLModels(_:logger:on:)

public func describeMLModels(_ input: DescribeMLModelsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeMLModelsOutput> 

Returns a list of MLModel that match the search criteria in the request.

describeTags(_:logger:on:)

public func describeTags(_ input: DescribeTagsInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeTagsOutput> 

Describes one or more of the tags for your Amazon ML object.

getBatchPrediction(_:logger:on:)

public func getBatchPrediction(_ input: GetBatchPredictionInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<GetBatchPredictionOutput> 

Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.

getDataSource(_:logger:on:)

public func getDataSource(_ input: GetDataSourceInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<GetDataSourceOutput> 

Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.
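
A minimal sketch of requesting the verbose form (the verbose member mirrors the AWS API's Verbose flag and is an assumption about the generated type):

    // Include the schema description and the list of files in the response.
    let dataSourceFuture = machineLearning.getDataSource(
        GetDataSourceInput(dataSourceId: "ds-example-id", verbose: true)
    )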

getEvaluation(_:logger:on:)

public func getEvaluation(_ input: GetEvaluationInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<GetEvaluationOutput> 

Returns an Evaluation that includes metadata as well as the current status of the Evaluation.

getMLModel(_:logger:on:)

public func getMLModel(_ input: GetMLModelInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<GetMLModelOutput> 

Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel.

GetMLModel provides results in normal or verbose format.

predict(_:logger:on:)

public func predict(_ input: PredictInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<PredictOutput> 

Generates a prediction for the observation using the specified ML Model.

Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
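
A minimal sketch of a real-time prediction request; the member names mirror the AWS API (MLModelId, PredictEndpoint, Record) and, together with the endpoint URL and feature values, are illustrative assumptions:

    // Send one observation, encoded as feature-name -> value strings, to the real-time endpoint.
    let predictInput = PredictInput(
        mlModelId: "ml-example-model",
        predictEndpoint: "https://realtime.machinelearning.us-east-1.amazonaws.com",
        record: ["age": "42", "plan": "basic"]
    )
    let predictionFuture = machineLearning.predict(predictInput)
        .map { $0.prediction }    // which fields are populated depends on the model type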

updateBatchPrediction(_:logger:on:)

public func updateBatchPrediction(_ input: UpdateBatchPredictionInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateBatchPredictionOutput> 

Updates the BatchPredictionName of a BatchPrediction.

You can use the GetBatchPrediction operation to view the contents of the updated data element.

updateDataSource(_:logger:on:)

public func updateDataSource(_ input: UpdateDataSourceInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateDataSourceOutput> 

Updates the DataSourceName of a DataSource.

You can use the GetDataSource operation to view the contents of the updated data element.

updateEvaluation(_:logger:on:)

public func updateEvaluation(_ input: UpdateEvaluationInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateEvaluationOutput> 

Updates the EvaluationName of an Evaluation.

You can use the GetEvaluation operation to view the contents of the updated data element.

updateMLModel(_:logger:on:)

public func updateMLModel(_ input: UpdateMLModelInput, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateMLModelOutput> 

Updates the MLModelName and the ScoreThreshold of an MLModel.

You can use the GetMLModel operation to view the contents of the updated data element.
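
A minimal sketch, with member names mirroring the AWS API (assumptions about the generated type):

    // Rename a model and move its classification score threshold; only these fields are mutable.
    let updateInput = UpdateMLModelInput(
        mlModelId: "ml-example-model",
        mlModelName: "example model v2",
        scoreThreshold: 0.6
    )
    let updateFuture = machineLearning.updateMLModel(updateInput)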

describeBatchPredictionsPaginator(_:logger:on:)

compiler(>=5.5.2) && canImport(_Concurrency)
public func describeBatchPredictionsPaginator(
        _ input: DescribeBatchPredictionsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeBatchPredictionsInput, DescribeBatchPredictionsOutput> 

Returns a list of BatchPrediction operations that match the search criteria in the request.

Return PaginatorSequence for operation.

Parameters:

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
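
Because the returned PaginatorSequence conforms to AsyncSequence, the pages can be consumed with for try await from an async context. A minimal sketch, assuming the output's results and name members follow the AWS API shape:

    // Iterate over every page of batch predictions using Swift concurrency.
    let pages = machineLearning.describeBatchPredictionsPaginator(DescribeBatchPredictionsInput())
    for try await page in pages {
        for batchPrediction in page.results ?? [] {
            print(batchPrediction.name ?? "unnamed")
        }
    }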

describeDataSourcesPaginator(_:logger:on:)

compiler(>=5.5.2) && canImport(_Concurrency)
public func describeDataSourcesPaginator(
        _ input: DescribeDataSourcesInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeDataSourcesInput, DescribeDataSourcesOutput> 

Returns a list of DataSource that match the search criteria in the request.

Return PaginatorSequence for operation.

Parameters:

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

describeEvaluationsPaginator(_:logger:on:)

compiler(>=5.5.2) && canImport(_Concurrency)
public func describeEvaluationsPaginator(
        _ input: DescribeEvaluationsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeEvaluationsInput, DescribeEvaluationsOutput> 

Returns a list of Evaluation that match the search criteria in the request.

Return PaginatorSequence for operation.

Parameters:

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

describeMLModelsPaginator(_:logger:on:)

compiler(>=5.5.2) && canImport(_Concurrency)
public func describeMLModelsPaginator(
        _ input: DescribeMLModelsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeMLModelsInput, DescribeMLModelsOutput> 

Returns a list of MLModel that match the search criteria in the request.

Return PaginatorSequence for operation.

Parameters:

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

describeBatchPredictionsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeBatchPredictionsPaginator<Result>(
        _ input: DescribeBatchPredictionsInput,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeBatchPredictionsOutput, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of BatchPrediction operations that match the search criteria in the request.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.
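
A minimal sketch of the accumulating variant, gathering all batch prediction names into a single array (results and name are assumed to follow the AWS API shape):

    // Collect the names of all batch predictions across every page.
    let namesFuture: EventLoopFuture<[String]> = machineLearning.describeBatchPredictionsPaginator(
        DescribeBatchPredictionsInput(),
        [String]()
    ) { accumulated, page, eventLoop in
        let names = (page.results ?? []).compactMap { $0.name }
        // Return true to request the next page, along with the updated accumulator.
        return eventLoop.makeSucceededFuture((true, accumulated + names))
    }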

describeBatchPredictionsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeBatchPredictionsPaginator(
        _ input: DescribeBatchPredictionsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeBatchPredictionsOutput, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

describeDataSourcesPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeDataSourcesPaginator<Result>(
        _ input: DescribeDataSourcesInput,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeDataSourcesOutput, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of DataSource that match the search criteria in the request.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

describeDataSourcesPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeDataSourcesPaginator(
        _ input: DescribeDataSourcesInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeDataSourcesOutput, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

describeEvaluationsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeEvaluationsPaginator<Result>(
        _ input: DescribeEvaluationsInput,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeEvaluationsOutput, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of Evaluation that match the search criteria in the request.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

describeEvaluationsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeEvaluationsPaginator(
        _ input: DescribeEvaluationsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeEvaluationsOutput, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

describeMLModelsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeMLModelsPaginator<Result>(
        _ input: DescribeMLModelsInput,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeMLModelsOutput, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of MLModel that match the search criteria in the request.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

describeMLModelsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeMLModelsPaginator(
        _ input: DescribeMLModelsInput,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeMLModelsOutput, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

waitUntilBatchPredictionAvailable(_:maxWaitTime:logger:on:)

public func waitUntilBatchPredictionAvailable(
        _ input: DescribeBatchPredictionsInput,
        maxWaitTime: TimeAmount? = nil,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> EventLoopFuture<Void> 

waitUntilDataSourceAvailable(_:maxWaitTime:logger:on:)

public func waitUntilDataSourceAvailable(
        _ input: DescribeDataSourcesInput,
        maxWaitTime: TimeAmount? = nil,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> EventLoopFuture<Void> 

waitUntilEvaluationAvailable(_:maxWaitTime:logger:on:)

public func waitUntilEvaluationAvailable(
        _ input: DescribeEvaluationsInput,
        maxWaitTime: TimeAmount? = nil,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> EventLoopFuture<Void> 

waitUntilMLModelAvailable(_:maxWaitTime:logger:on:)

public func waitUntilMLModelAvailable(
        _ input: DescribeMLModelsInput,
        maxWaitTime: TimeAmount? = nil,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> EventLoopFuture<Void>
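
These waiters poll the corresponding Describe operation until the matching resources become available (or the wait times out). A minimal sketch, where the filter members follow the AWS DescribeMLModels API and are assumptions about the generated types:

    // Block until every model whose name starts with "churn-" has finished creating.
    let waitInput = DescribeMLModelsInput(filterVariable: .name, prefix: "churn-")
    try machineLearning.waitUntilMLModelAvailable(waitInput, maxWaitTime: .minutes(30)).wait()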