
Batch

Service object for interacting with AWS Batch service.

public struct Batch: AWSService 

Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch takes advantage of this computing model to remove the undifferentiated heavy lifting of configuring and managing the required infrastructure, while adopting a familiar batch computing software approach. Given these advantages, Batch can help you efficiently provision resources in response to submitted jobs, eliminating capacity constraints, reducing compute costs, and delivering results more quickly.

As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus your time and energy on analyzing results and solving your specific problems.

Inheritance

AWSService

Initializers

init(client:region:partition:endpoint:timeout:byteBufferAllocator:options:)

Initialize the Batch client

public init(
        client: AWSClient,
        region: SotoCore.Region? = nil,
        partition: AWSPartition = .aws,
        endpoint: String? = nil,
        timeout: TimeAmount? = nil,
        byteBufferAllocator: ByteBufferAllocator = ByteBufferAllocator(),
        options: AWSServiceConfig.Options = []
    ) 

Parameters

  • client: AWSClient used to process requests
  • region: Region of server you want to communicate with. This will override the partition parameter.
  • partition: AWS partition where service resides, standard (.aws), china (.awscn), government (.awsusgov).
  • endpoint: Custom endpoint URL to use instead of standard AWS servers
  • timeout: Timeout value for HTTP requests
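A minimal usage sketch, assuming the SotoBatch module and an AWSClient created with the .createNew HTTP client provider:

import SotoBatch

// Create a shared AWSClient and a Batch service object for us-east-1.
let client = AWSClient(httpClientProvider: .createNew)
let batch = Batch(client: client, region: .useast1)

// ... make Batch API calls here ...

// Shut the client down once you are finished with it.
try client.syncShutdown()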

init(from:patch:)

Initializer required by AWSService.with(middlewares:timeout:byteBufferAllocator:options:). You cannot use this initializer directly, as there are no public initializers for AWSServiceConfig.Patch. Use AWSService.with(middlewares:timeout:byteBufferAllocator:options:) instead.

public init(from: Batch, patch: AWSServiceConfig.Patch) 
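Rather than calling this initializer, you would typically derive a modified service object through AWSService.with. A sketch, assuming an existing batch value:

// Derive a copy of the service object with a longer timeout; the underlying
// AWSClient and the rest of the configuration are shared with the original.
let patientBatch = batch.with(timeout: .minutes(5))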

Properties

client

Client used for communication with AWS

public let client: AWSClient

config

Service configuration

public let config: AWSServiceConfig

Methods

cancelJob(_:logger:on:)

public func cancelJob(_ input: CancelJobRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CancelJobResponse> 

Cancels a job in a Batch job queue. Jobs that are in the SUBMITTED, PENDING, or RUNNABLE state are canceled. Jobs that have progressed to STARTING or RUNNING aren't canceled, but the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
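A sketch of canceling a queued job (the jobId and reason field names are assumed from the AWS Batch API shape; the job ID is a placeholder):

let request = Batch.CancelJobRequest(jobId: "example-job-id", reason: "Superseded by a newer submission")
// .wait() blocks the calling thread; fine for scripts and tests.
_ = try batch.cancelJob(request).wait()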

createComputeEnvironment(_:logger:on:)

public func createComputeEnvironment(_ input: CreateComputeEnvironmentRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateComputeEnvironmentResponse> 

Creates a Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources.

In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. You can choose either to use EC2 On-Demand Instances and EC2 Spot Instances, or to use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price.

Multi-node parallel jobs aren't supported on Spot Instances.

In an unmanaged compute environment, you manage your own EC2 compute resources and have a lot of flexibility with how you configure them. For example, you can use custom AMIs. However, you must verify that each of your AMIs meets the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you create your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide.

Batch doesn't upgrade the AMIs in a compute environment after the environment is created. For example, it doesn't update the AMIs when a newer version of the Amazon ECS optimized AMI is available. Therefore, you're responsible for managing the guest operating system (including its updates and security patches) and any additional application software or utilities that you install on the compute resources. To use a new AMI for your Batch jobs, complete these steps (a sketch of steps 2 to 4 follows the list):

  1. Create a new compute environment with the new AMI.

  2. Add the compute environment to an existing job queue.

  3. Remove the earlier compute environment from your job queue.

  4. Delete the earlier compute environment.
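A rough sketch of steps 2 through 4. Creating the replacement environment in step 1 is elided, and the request field names, enum cases, and environment names are assumptions based on the AWS Batch API shape:

// 2. Point the existing job queue at the new compute environment only.
let updateQueue = Batch.UpdateJobQueueRequest(
    computeEnvironmentOrder: [.init(computeEnvironment: "my-env-v2", order: 1)],
    jobQueue: "my-job-queue"
)
_ = try batch.updateJobQueue(updateQueue).wait()

// 3 and 4. Disable the earlier compute environment, then delete it.
_ = try batch.updateComputeEnvironment(.init(computeEnvironment: "my-env-v1", state: .disabled)).wait()
_ = try batch.deleteComputeEnvironment(.init(computeEnvironment: "my-env-v1")).wait()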

createJobQueue(_:logger:on:)

public func createJobQueue(_ input: CreateJobQueueRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateJobQueueResponse> 

Creates a Batch job queue. When you create a job queue, you associate one or more compute environments to the queue and assign an order of preference for the compute environments.

You also set a priority for the job queue that determines the order in which the Batch scheduler places jobs onto its associated compute environments. For example, if a compute environment is associated with more than one job queue, the job queue with a higher priority is given preference for scheduling jobs to that compute environment.
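A sketch of creating a queue backed by a single compute environment (field names are assumed from the AWS Batch API shape; the environment and queue names are placeholders):

let request = Batch.CreateJobQueueRequest(
    computeEnvironmentOrder: [.init(computeEnvironment: "my-compute-env", order: 1)],
    jobQueueName: "high-priority-queue",
    priority: 10
)
let response = try batch.createJobQueue(request).wait()
print(response.jobQueueArn)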

createSchedulingPolicy(_:logger:on:)

public func createSchedulingPolicy(_ input: CreateSchedulingPolicyRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<CreateSchedulingPolicyResponse> 

Creates a Batch scheduling policy.

deleteComputeEnvironment(_:logger:on:)

public func deleteComputeEnvironment(_ input: DeleteComputeEnvironmentRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteComputeEnvironmentResponse> 

Deletes a Batch compute environment.

Before you can delete a compute environment, you must set its state to DISABLED with the UpdateComputeEnvironment API operation and disassociate it from any job queues with the UpdateJobQueue API operation. Compute environments that use Fargate resources must terminate all active jobs on that compute environment before deleting the compute environment. If this isn't done, the compute environment enters an invalid state.

deleteJobQueue(_:logger:on:)

public func deleteJobQueue(_ input: DeleteJobQueueRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteJobQueueResponse> 

Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are eventually terminated when you delete a job queue. The jobs are terminated at a rate of about 16 jobs each second.

It's not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue request.

deleteSchedulingPolicy(_:logger:on:)

public func deleteSchedulingPolicy(_ input: DeleteSchedulingPolicyRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeleteSchedulingPolicyResponse> 

Deletes the specified scheduling policy.

You can't delete a scheduling policy that's used in any job queues.

deregisterJobDefinition(_:logger:on:)

public func deregisterJobDefinition(_ input: DeregisterJobDefinitionRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DeregisterJobDefinitionResponse> 

Deregisters a Batch job definition. Job definitions are permanently deleted after 180 days.

describeComputeEnvironments(_:logger:on:)

public func describeComputeEnvironments(_ input: DescribeComputeEnvironmentsRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeComputeEnvironmentsResponse> 

Describes one or more of your compute environments.

If you're using an unmanaged compute environment, you can use the DescribeComputeEnvironments operation to determine the ecsClusterArn that you launch your Amazon ECS container instances into.

describeJobDefinitions(_:logger:on:)

public func describeJobDefinitions(_ input: DescribeJobDefinitionsRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeJobDefinitionsResponse> 

Describes a list of job definitions. You can specify a status (such as ACTIVE) to only return job definitions that match that status.
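A sketch of listing only active job definitions (the status string and the response fields are assumed from the AWS Batch API shape):

let request = Batch.DescribeJobDefinitionsRequest(status: "ACTIVE")
let response = try batch.describeJobDefinitions(request).wait()
for definition in response.jobDefinitions ?? [] {
    print(definition.jobDefinitionName, "revision", definition.revision)
}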

describeJobQueues(_:logger:on:)

public func describeJobQueues(_ input: DescribeJobQueuesRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeJobQueuesResponse> 

Describes one or more of your job queues.

describeJobs(_:logger:on:)

public func describeJobs(_ input: DescribeJobsRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeJobsResponse> 

Describes a list of Batch jobs.

describeSchedulingPolicies(_:logger:on:)

public func describeSchedulingPolicies(_ input: DescribeSchedulingPoliciesRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<DescribeSchedulingPoliciesResponse> 

Describes one or more of your scheduling policies.

listJobs(_:logger:on:)

public func listJobs(_ input: ListJobsRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<ListJobsResponse> 

Returns a list of Batch jobs.

You must specify only one of the following items:

  • A job queue ID to return a list of jobs in that job queue

  • A multi-node parallel job ID to return a list of nodes for that job

  • An array job ID to return a list of the children for that job

You can filter the results by job status with the jobStatus parameter. If you don't specify a status, only RUNNING jobs are returned.
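For example, listing succeeded jobs in a particular queue might look like this (a sketch; the request fields and the .succeeded case are assumed from the AWS Batch API shape):

let request = Batch.ListJobsRequest(jobQueue: "my-job-queue", jobStatus: .succeeded)
let response = try batch.listJobs(request).wait()
for job in response.jobSummaryList {
    print(job.jobId, job.jobName)
}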

listSchedulingPolicies(_:logger:on:)

public func listSchedulingPolicies(_ input: ListSchedulingPoliciesRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<ListSchedulingPoliciesResponse> 

Returns a list of Batch scheduling policies.

listTagsForResource(_:logger:on:)

public func listTagsForResource(_ input: ListTagsForResourceRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<ListTagsForResourceResponse> 

Lists the tags for a Batch resource. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs are not supported.

registerJobDefinition(_:logger:on:)

public func registerJobDefinition(_ input: RegisterJobDefinitionRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<RegisterJobDefinitionResponse> 

Registers a Batch job definition.

submitJob(_:logger:on:)

public func submitJob(_ input: SubmitJobRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<SubmitJobResponse> 

Submits a Batch job from a job definition. Parameters that are specified during SubmitJob override parameters defined in the job definition. vCPU and memory requirements that are specified in the resourceRequirements objects in the job definition are the exception. They can't be overridden this way using the memory and vcpus parameters. Rather, you must specify updates to job definition parameters in a resourceRequirements object that's included in the containerOverrides parameter.

Job queues with a scheduling policy are limited to 500 active fair share identifiers at a time.

Jobs that run on Fargate resources can't be guaranteed to run for more than 14 days. This is because, after 14 days, Fargate resources might become unavailable and jobs might be terminated.
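A sketch of submitting a job while overriding its vCPU and memory through containerOverrides.resourceRequirements, as described above (field names and the .vcpu and .memory cases are assumed from the AWS Batch API shape):

let request = Batch.SubmitJobRequest(
    containerOverrides: .init(
        resourceRequirements: [
            .init(type: .vcpu, value: "2"),
            .init(type: .memory, value: "4096")
        ]
    ),
    jobDefinition: "my-job-definition",
    jobName: "example-job",
    jobQueue: "my-job-queue"
)
let response = try batch.submitJob(request).wait()
print("Submitted job", response.jobId)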

tagResource(_:logger:on:)

public func tagResource(_ input: TagResourceRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<TagResourceResponse> 

Associates the specified tags to a resource with the specified resourceArn. If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags that are associated with that resource are deleted as well. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs are not supported.

terminateJob(_:logger:on:)

public func terminateJob(_ input: TerminateJobRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<TerminateJobResponse> 

Terminates a job in a job queue. Jobs that are in the STARTING or RUNNING state are terminated, which causes them to transition to FAILED. Jobs that have not progressed to the STARTING state are canceled.

untagResource(_:logger:on:)

public func untagResource(_ input: UntagResourceRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UntagResourceResponse> 

Deletes specified tags from a Batch resource.

updateComputeEnvironment(_:logger:on:)

public func updateComputeEnvironment(_ input: UpdateComputeEnvironmentRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateComputeEnvironmentResponse> 

Updates a Batch compute environment.

updateJobQueue(_:logger:on:)

public func updateJobQueue(_ input: UpdateJobQueueRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateJobQueueResponse> 

Updates a job queue.

updateSchedulingPolicy(_:logger:on:)

public func updateSchedulingPolicy(_ input: UpdateSchedulingPolicyRequest, logger: Logger = AWSClient.loggingDisabled, on eventLoop: EventLoop? = nil) -> EventLoopFuture<UpdateSchedulingPolicyResponse> 

Updates a scheduling policy.

describeComputeEnvironmentsPaginator(_:logger:on:)

Available when compiler(>=5.5.2) && canImport(_Concurrency).
public func describeComputeEnvironmentsPaginator(
        _ input: DescribeComputeEnvironmentsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeComputeEnvironmentsRequest, DescribeComputeEnvironmentsResponse> 

Describes one or more of your compute environments.

If you're using an unmanaged compute environment, you can use the DescribeComputeEnvironments operation to determine the ecsClusterArn that you launch your Amazon ECS container instances into.

Return PaginatorSequence for operation.

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
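From an async context, the returned sequence can be iterated with for try await. A sketch (the maxResults field and the response's computeEnvironments property are assumed from the AWS Batch API shape):

let pages = batch.describeComputeEnvironmentsPaginator(.init(maxResults: 25))
for try await page in pages {
    for environment in page.computeEnvironments ?? [] {
        print(environment.computeEnvironmentName)
    }
}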

describeJobDefinitionsPaginator(_:logger:on:)

Available when compiler(>=5.5.2) && canImport(_Concurrency).
public func describeJobDefinitionsPaginator(
        _ input: DescribeJobDefinitionsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeJobDefinitionsRequest, DescribeJobDefinitionsResponse> 

Describes a list of job definitions. You can specify a status (such as ACTIVE) to only return job definitions that match that status.

Return PaginatorSequence for operation.

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

describeJobQueuesPaginator(_:logger:on:)

Available when compiler(>=5.5.2) && canImport(_Concurrency).
public func describeJobQueuesPaginator(
        _ input: DescribeJobQueuesRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<DescribeJobQueuesRequest, DescribeJobQueuesResponse> 

Describes one or more of your job queues.

Return PaginatorSequence for operation.

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

listJobsPaginator(_:logger:on:)

Available when compiler(>=5.5.2) && canImport(_Concurrency).
public func listJobsPaginator(
        _ input: ListJobsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<ListJobsRequest, ListJobsResponse> 

Returns a list of Batch jobs.

You must specify only one of the following items:

  • A job queue ID to return a list of jobs in that job queue

  • A multi-node parallel job ID to return a list of nodes for that job

  • An array job ID to return a list of the children for that job

You can filter the results by job status with the jobStatus parameter. If you don't specify a status, only RUNNING jobs are returned.

Return PaginatorSequence for operation.

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

listSchedulingPoliciesPaginator(_:logger:on:)

Available when compiler(>=5.5.2) && canImport(_Concurrency).
public func listSchedulingPoliciesPaginator(
        _ input: ListSchedulingPoliciesRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil
    ) -> AWSClient.PaginatorSequence<ListSchedulingPoliciesRequest, ListSchedulingPoliciesResponse> 

Returns a list of Batch scheduling policies.

Return PaginatorSequence for operation.

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on

describeComputeEnvironmentsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeComputeEnvironmentsPaginator<Result>(
        _ input: DescribeComputeEnvironmentsRequest,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeComputeEnvironmentsResponse, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Describes one or more of your compute environments.

If you're using an unmanaged compute environment, you can use the DescribeComputeEnvironments operation to determine the ecsClusterArn that you launch your Amazon ECS container instances into.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.
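A sketch of accumulating a result across pages with this reduce-style paginator, here simply counting compute environments (the response's computeEnvironments property is an assumption based on the AWS Batch API shape):

let total = try batch.describeComputeEnvironmentsPaginator(
    .init(),
    0,
    onPage: { count, response, eventLoop in
        // Add this page's environments to the running count and keep paginating.
        eventLoop.makeSucceededFuture((true, count + (response.computeEnvironments?.count ?? 0)))
    }
).wait()
print("Compute environments:", total)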

describeComputeEnvironmentsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeComputeEnvironmentsPaginator(
        _ input: DescribeComputeEnvironmentsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeComputeEnvironmentsResponse, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

describeJobDefinitionsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeJobDefinitionsPaginator<Result>(
        _ input: DescribeJobDefinitionsRequest,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeJobDefinitionsResponse, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Describes a list of job definitions. You can specify a status (such as ACTIVE) to only return job definitions that match that status.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

describeJobDefinitionsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeJobDefinitionsPaginator(
        _ input: DescribeJobDefinitionsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeJobDefinitionsResponse, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

describeJobQueuesPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func describeJobQueuesPaginator<Result>(
        _ input: DescribeJobQueuesRequest,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, DescribeJobQueuesResponse, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Describes one or more of your job queues.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

describeJobQueuesPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func describeJobQueuesPaginator(
        _ input: DescribeJobQueuesRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (DescribeJobQueuesResponse, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

listJobsPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func listJobsPaginator<Result>(
        _ input: ListJobsRequest,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, ListJobsResponse, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of Batch jobs.

You must specify only one of the following items:

  • A job queue ID to return a list of jobs in that job queue

  • A multi-node parallel job ID to return a list of nodes for that job

  • An array job ID to return a list of the children for that job

You can filter the results by job status with the jobStatus parameter. If you don't specify a status, only RUNNING jobs are returned.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

listJobsPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func listJobsPaginator(
        _ input: ListJobsRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (ListJobsResponse, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.

listSchedulingPoliciesPaginator(_:_:logger:on:onPage:)

Provide paginated results to closure onPage for it to combine them into one result. This works in a similar manner to Array.reduce<Result>(_:_:) -> Result.

public func listSchedulingPoliciesPaginator<Result>(
        _ input: ListSchedulingPoliciesRequest,
        _ initialValue: Result,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (Result, ListSchedulingPoliciesResponse, EventLoop) -> EventLoopFuture<(Bool, Result)>
    ) -> EventLoopFuture<Result> 

Returns a list of Batch scheduling policies.

Parameters:

  • input: Input for request
  • initialValue: The value to use as the initial accumulating value. initialValue is passed to onPage the first time it is called.
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each paginated response. It combines an accumulating result with the contents of response. This combined result is then returned along with a boolean indicating if the paginate operation should continue.

listSchedulingPoliciesPaginator(_:logger:on:onPage:)

Provide paginated results to closure onPage.

public func listSchedulingPoliciesPaginator(
        _ input: ListSchedulingPoliciesRequest,
        logger: Logger = AWSClient.loggingDisabled,
        on eventLoop: EventLoop? = nil,
        onPage: @escaping (ListSchedulingPoliciesResponse, EventLoop) -> EventLoopFuture<Bool>
    ) -> EventLoopFuture<Void> 

Parameters

  • input: Input for request
  • logger: Logger used for logging
  • eventLoop: EventLoop to run this process on
  • onPage: closure called with each block of entries. Returns boolean indicating whether we should continue.