apex.server.process and async/await: What the Tutorials Skip


If you write JavaScript in Oracle APEX, you almost certainly use apex.server.process to call PL/SQL from the browser. It is the bread and butter of any non-trivial APEX page: fetching data on demand, validating before submit, triggering server-side work without a full page reload.

There are plenty of tutorials on how to use it with async/await. Vincent Morneau wrote the canonical one back in 2020. There are newer takes on Hashnode and other community blogs. They all show roughly the same pattern: wrap apex.server.process in a new Promise, then await the wrapper.

What none of them explain clearly is why you wrap it, or whether you even need to. The honest answer involves understanding what apex.server.process actually returns and how that interacts with await. This post covers that, and then moves on to the patterns I find genuinely useful in production APEX code: parallel calls with Promise.all, integrating errors with apex.message, and aborting in-flight requests.

What apex.server.process actually returns

The APEX JavaScript API documentation says:

A promise object is returned. The promise done method is called if the Ajax request completes successfully... The promise also has an always method that is called after done and error.

"Promise" here is doing a lot of work. What apex.server.process actually returns is a jQuery deferred, which has .done(), .fail(), and .always() methods rather than the standard .then() and .catch(). jQuery deferreds are thenable, meaning they conform enough to the Promise A+ spec that await can work with them, but their behaviour differs from a native Promise in two important ways.

First, the rejection callback signature is different. A native Promise rejects with a single value, so .catch(err) gives you one argument. A jQuery deferred rejects with three: (jqXHR, textStatus, errorThrown). When you await a deferred and it rejects, only the first of those arguments becomes the thrown value, so what you catch is typically the jqXHR object rather than the clean Error you might expect.

Second, jQuery deferreds historically swallowed errors thrown inside .done() handlers, though modern jQuery (3.x and later) has fixed this. APEX has shipped recent jQuery for a while, so this is mostly a historical concern, but it surfaces in older blog posts as "always wrap to avoid silent failures."
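To make the first difference concrete, here is a sketch (hypothetical process name) of awaiting the deferred directly and inspecting what the catch path actually receives. Because the caught value is usually the jqXHR object, the useful fields are err.status and err.statusText, not err.message:

```javascript
// Sketch: awaiting apex.server.process directly. On failure, the caught
// value is typically the jqXHR object from the rejected deferred.
async function loadProducts(categoryId) {
   try {
      return await apex.server.process(
         'GET_PRODUCTS',
         { x01: categoryId },
         { dataType: 'json' }
      );
   } catch (err) {
      // err.status is the HTTP status (e.g. 500); err.statusText the text
      console.error('loadProducts failed:', err.status, err.statusText);
      throw err;
   }
}
```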

Now to the practical question. Do you actually need to wrap it?

Most existing tutorials show this pattern:

function getProducts(categoryId) {
   return new Promise((resolve, reject) => {
      apex.server.process(
         'GET_PRODUCTS',
         { x01: categoryId },
         {
            dataType: 'json',
            success: resolve,
            error: reject
         }
      );
   });
}

This converts the jQuery deferred into a native Promise with standard .then()/.catch() semantics. It works. But the simpler version also works in modern APEX:

async function getProducts(categoryId) {
   return apex.server.process(
      'GET_PRODUCTS',
      { x01: categoryId },
      { dataType: 'json' }
   );
}

The async keyword wraps whatever the function returns into a Promise, and await on the returned thenable will resolve to the JSON response. No manual wrapping needed.

When the simpler version is fine:

  • You only care about the success value.

  • You handle errors with try/catch around the await call.

  • You don't need to inspect the jQuery-specific error arguments (jqXHR, textStatus, errorThrown).

When you should wrap it explicitly:

  • You need to attach jQuery-specific success/error logic via the options object and return a clean Promise to callers.

  • You need to inspect the full rejection trio to distinguish, say, a timeout from a server-side error.

  • You're building a library function that other developers will consume and you want predictable Promise semantics.
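For the second and third bullets, the explicit wrapper earns its keep by converting the rejection trio into a single descriptive Error. A sketch, with a hypothetical process name and assuming the jQuery timeout option is honoured by the underlying AJAX call:

```javascript
// Sketch: wrap explicitly to preserve the jQuery rejection trio and
// reject with a clean Error that distinguishes a timeout from a failure.
function getProductsWrapped(categoryId) {
   return new Promise((resolve, reject) => {
      apex.server.process(
         'GET_PRODUCTS',
         { x01: categoryId },
         {
            dataType: 'json',
            timeout: 5000, // assumed to pass through to jQuery.ajax
            success: resolve,
            error: (jqXHR, textStatus, errorThrown) => {
               // textStatus is 'timeout' for timeouts, 'error' for HTTP failures
               reject(new Error(
                  textStatus === 'timeout'
                     ? 'Server did not respond in time.'
                     : 'Request failed: ' + (errorThrown || textStatus)
               ));
            }
         }
      );
   });
}
```

Callers now get predictable Promise semantics: a single Error in catch, with the failure mode already folded into the message.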

For most page-level JavaScript in APEX applications, the simpler version is what I reach for first.

The error handling story

The most common bug I see in async/await code that calls apex.server.process is missing or sloppy error handling. The pattern that works reliably:

async function loadCategoryCount() {
   try {
      const result = await apex.server.process(
         'GET_CATEGORY_COUNT',
         { x01: $v('P10_CATEGORY_ID') },
         { dataType: 'json' }
      );

      $s('P10_COUNT', result.count);
   } catch (err) {
      apex.message.showErrors([{
         type: 'error',
         location: 'page',
         message: 'Could not load category count.',
         unsafe: false
      }]);
      console.error('loadCategoryCount failed:', err);
   }
}

A few things worth noting:

Always wrap the await in try/catch.

An unhandled rejection in an async function becomes an unhandled Promise rejection at the global level. In the browser, that surfaces as a console warning and silent failure. The user sees nothing.

Use apex.message.showErrors rather than alert or console.log alone.

The apex.message API renders errors in the standard APEX notification region, which means they look like every other validation error the user sees and obey the same dismissal rules. The location property accepts 'page' for the top notification region or 'inline' for a specific page item.
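For the inline case, the message also needs a pageItem naming the item it is anchored to. A sketch, with a hypothetical item name, passing both locations so the message appears inline and in the notification region:

```javascript
// Sketch: inline error anchored to a specific page item (hypothetical name).
// location accepts 'page', 'inline', or an array of both.
function showCategoryError() {
   apex.message.showErrors([{
      type: 'error',
      location: ['inline', 'page'],
      pageItem: 'P10_CATEGORY_ID',
      message: 'Category could not be loaded.',
      unsafe: false
   }]);
}
```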

Log the underlying error to the console.

Whatever you show the user should be friendly. The actual err object often contains useful diagnostic information that you want available when someone reports a bug.

One thing to be cautious about: if the PL/SQL on-demand process raises an exception, what apex.server.process returns in the error path depends on how the error is surfaced.

A RAISE_APPLICATION_ERROR in your on-demand process produces an HTTP 500 response that the deferred treats as a failure. A PL/SQL block that completes successfully but returns an error structure in its JSON payload is a success from the AJAX layer's point of view. You need to handle the second case yourself:

const result = await apex.server.process('SAVE_ORDER', { ... });

if (result.status === 'error') {
   apex.message.showErrors([{
      type: 'error',
      location: 'page',
      message: result.message,
      unsafe: false
   }]);
   return;
}

// proceed with success path

I prefer returning structured error responses from on-demand processes rather than raising exceptions, because it gives the JavaScript side cleaner control over the user experience. But that is an opinion, not a rule.
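If you adopt the structured-error convention, a small helper (hypothetical, names assumed) can centralise the status check so every caller deals only with try/catch, instead of repeating the if (result.status === 'error') block:

```javascript
// Sketch: normalise structured error payloads into thrown Errors.
// Assumes the on-demand process returns { status: 'error', message: ... }
// when something goes wrong on the PL/SQL side.
async function callProcess(name, data) {
   const result = await apex.server.process(name, data, { dataType: 'json' });

   if (result && result.status === 'error') {
      // Promote the structured payload into a real Error for the catch path
      const err = new Error(result.message || 'Server process ' + name + ' failed');
      err.serverResult = result; // keep the full payload for diagnostics
      throw err;
   }

   return result;
}
```

With this in place, both HTTP-level failures and application-level errors land in the same catch block.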

Parallel calls with Promise.all

The pattern I find most under-used in existing APEX code is running multiple server processes in parallel. The sequential version:

async function loadDashboard() {
   const orders = await apex.server.process('GET_ORDERS', {}, { dataType: 'json' });
   const products = await apex.server.process('GET_TOP_PRODUCTS', {}, { dataType: 'json' });
   const customers = await apex.server.process('GET_NEW_CUSTOMERS', {}, { dataType: 'json' });

   renderDashboard(orders, products, customers);
}

Three round trips to the server, each waiting for the previous one to finish. If each takes 200ms, the dashboard takes 600ms to load.

The same code with Promise.all runs all three in parallel:

async function loadDashboard() {
   try {
      const [orders, products, customers] = await Promise.all([
         apex.server.process('GET_ORDERS', {}, { dataType: 'json' }),
         apex.server.process('GET_TOP_PRODUCTS', {}, { dataType: 'json' }),
         apex.server.process('GET_NEW_CUSTOMERS', {}, { dataType: 'json' })
      ]);

      renderDashboard(orders, products, customers);
   } catch (err) {
      apex.message.showErrors([{
         type: 'error',
         location: 'page',
         message: 'Some dashboard data could not load.',
         unsafe: false
      }]);
      console.error('loadDashboard failed:', err);
   }
}

Now the total time is whatever the slowest call takes. On a dashboard with three independent data fetches, this is often the difference between "feels responsive" and "feels slow."

Promise.all has one important property: if any of the promises reject, the whole thing rejects immediately, and the other in-flight calls keep running but their results are discarded. If you want partial success behaviour (render what loaded, show errors for what failed), use Promise.allSettled instead:

const results = await Promise.allSettled([
   apex.server.process('GET_ORDERS', {}, { dataType: 'json' }),
   apex.server.process('GET_TOP_PRODUCTS', {}, { dataType: 'json' }),
   apex.server.process('GET_NEW_CUSTOMERS', {}, { dataType: 'json' })
]);

results.forEach((r, i) => {
   if (r.status === 'fulfilled') {
      // r.value holds the JSON response for this call; render it
   } else {
      // r.reason holds the rejection; log it or show a per-section error
   }
});

I default to Promise.all when the page genuinely needs all three results, and Promise.allSettled when partial rendering is acceptable. The choice depends on how the page degrades when one call fails.

Sequencing dependent calls

Promise.all is for independent calls. If one call depends on the result of another, sequential await is the right pattern:

async function placeOrder() {
   try {
      const customer = await apex.server.process('VALIDATE_CUSTOMER', { x01: $v('P20_CUSTOMER_ID') }, { dataType: 'json' });

      if (customer.status !== 'active') {
         apex.message.showErrors([{
            type: 'error',
            location: 'page',
            message: 'Customer is not active. Cannot place order.',
            unsafe: false
         }]);
         return;
      }

      const order = await apex.server.process('CREATE_ORDER', {
         x01: customer.id,
         x02: $v('P20_PRODUCT_ID'),
         x03: $v('P20_QUANTITY')
      }, { dataType: 'json' });

      $s('P20_ORDER_ID', order.id);
      apex.message.showPageSuccess('Order created: ' + order.reference);
   } catch (err) {
      apex.message.showErrors([{
         type: 'error',
         location: 'page',
         message: 'Order could not be placed.',
         unsafe: false
      }]);
      console.error('placeOrder failed:', err);
   }
}

The validate call has to complete before the create call has the customer ID to work with. Each await suspends the function until the server responds. This is the right shape when the data flow is genuinely sequential.

What you want to avoid is the accidental sequential pattern: two await calls in a row that don't actually depend on each other. That is Promise.all territory.
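One way to fix the accidental sequential pattern without restructuring much: start the requests first, then await them. Both calls are in flight at the same time, and the awaits only wait for completion (hypothetical process names):

```javascript
// Sketch: both requests start immediately on assignment; the awaits
// only pause until each one completes, so the calls overlap.
async function loadOrdersAndProducts() {
   const ordersPromise = apex.server.process('GET_ORDERS', {}, { dataType: 'json' });
   const productsPromise = apex.server.process('GET_TOP_PRODUCTS', {}, { dataType: 'json' });

   const orders = await ordersPromise;
   const products = await productsPromise;

   return { orders, products };
}
```

Functionally this is equivalent to Promise.all for two calls, though Promise.all scales better and fails fast.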

Aborting in-flight requests

This is a feature most APEX developers don't realise they have. The promise returned by apex.server.process has an .abort() method, documented in the Oracle API reference. Useful for type-ahead search, dependent select lists, anything where a new request invalidates an in-flight one.

let currentRequest = null;

async function searchProducts(searchTerm) {
   if (currentRequest) {
      currentRequest.abort();
   }

   currentRequest = apex.server.process('SEARCH_PRODUCTS', { x01: searchTerm }, { dataType: 'json' });

   try {
      const results = await currentRequest;
      renderResults(results);
   } catch (err) {
      // ignore abort errors, but log real failures
      if (err && err.statusText !== 'abort') {
         console.error('searchProducts failed:', err);
      }
   }
}

Two things to watch for. First, the APEX documentation notes that abort does not work for requests that use any queue options: if you have configured pOptions with a queue setting, you cannot cancel mid-flight. Second, an aborted request rejects the promise with an error whose statusText is 'abort'. You probably want to filter that out of your error display, since the user did not encounter a failure; you cancelled the call on their behalf.

A note on the queue option

The queue option in apex.server.process is worth a paragraph because it changes the async story. By default, every call is independent and runs in parallel with anything else in flight. If you specify a queue name, calls in the same queue run in sequence and can be configured to abort previous calls in the queue.

apex.server.process('SEARCH_PRODUCTS', { x01: term }, {
   dataType: 'json',
   queue: { name: 'search', action: 'replace' }
});

With action: 'replace', a new call in the search queue aborts any pending call in that queue. This is built-in debouncing for the common case of type-ahead inputs, and it removes the need for the manual abort pattern above. The trade-off is that the returned promise's .abort() method does not work when queue options are in use.

I tend to use the manual abort pattern when I need fine-grained control, and the queue option when the behaviour I want is "always abort the previous in-flight call." Both are valid choices.

Summary

The short version of this post is:

  1. apex.server.process returns a jQuery deferred, which is thenable. You can await it directly in modern APEX without wrapping it in a new Promise.

  2. Always wrap the await in try/catch and surface errors through apex.message.showErrors.

  3. Use Promise.all for parallel independent calls. Use sequential await only when calls depend on each other.

  4. The promise has an .abort() method. Use it for type-ahead and similar patterns, or use the queue option for the same effect.

The existing tutorials are not wrong. They just don't go far enough. The patterns above are the ones I find make the difference between APEX JavaScript that feels snappy and JavaScript that feels like it is fighting the framework.
