Ye Shen: How to build a front-end R&D platform with scalable processes and high productivity


Front End Early Chat, a new starting point for front-end growth, is jointly organized with Juejin.




The main text follows.

This article is the 23rd session of the front-end CI/CD special and the 177th session of Front End Early Chat overall, shared by Ye Shen of Alibaba Hema.

Write in front

This article is the edited transcript of the April 10 Front End Early Chat special session on front-end CI/CD.

Since joining Alibaba in 2015, I have been exploring front-end engineering and have always been very interested in the field. I previously led the JUST engineering system in B2B, and later came to Hema, where I am responsible for front-end engineering and basic developer-experience work. This talk on the ReX Dev R&D platform can be regarded as a phased summary of Hema's current engineering system.

This talk focuses on the design thinking behind the platform architecture rather than product features, for two reasons: first, the platform is iterating rapidly and some product capabilities are still being polished; second, business scenarios differ across teams, so their engineering demands differ widely, and a feature walkthrough would be of limited value.

In this article, I will walk through as many as possible of the architectural questions we weighed while designing ReX Dev, and I hope it is helpful to you.

Background introduction

The status quo of Hema's middle and back office

Our team is the front-end team for Hema's B side, mainly responsible for Hema's middle and back office business. Before discussing the specific architecture, let's look at the current state of Hema's middle and back office technology.

As the figure above shows, the Hema Workbench is the core system of the middle and back office. Xiaoer (internal operations) users access the Hema Workbench through different terminals. Under this workbench sit dozens of vertical business lines, from suppliers and stores to goods and logistics.

Engineering characteristics of the middle and back office

From a front-end engineering perspective, Hema's middle and back office scenarios have the following characteristics:

  • Large page base: there are 6000+ stock pages and 3000+ actively maintained pages (within the last half year). This means a huge amount of concurrent front-end development, with very high efficiency requirements for both new and old applications, so we need to build a highly efficient R&D platform;
  • Discrete, modularized pages: because Hema has many vertical business lines and complex business formats, this large page count comes with highly discrete pages. At the same time they show clear modular characteristics, which gives us the opportunity to try technologies such as LowCode/NoCode;
  • Many application types: due to history and special scenarios, Hema develops a variety of front-end application types, each suited to its own scenarios, such as micro-applications, multi-page applications, and monolithic applications. Future business changes are likely to bring new application forms, so the R&D process has a higher demand for scalability.

In order to respond to the above engineering demands, we need a set of highly productive and scalable R&D platforms.

Hema's existing R&D process was customized on top of the original B2B JUST Flow system. That platform has serious maintainability and stability problems, platform ownership is also an issue, and it is not suitable for large-scale refactoring for Hema's scenarios, so we decided to build Hema's own R&D platform: ReX Dev.

Product positioning and evolution strategy

ReX Dev is positioned as a high-productivity R&D platform for Hema front-end applications. This means the platform serves only Hema (and related businesses), considers only front-end applications, and demands high productivity. A clear positioning clarifies the platform's capability boundaries and keeps the product focused on the core problems it must solve.

As a platform serving front-end application developers, we can't consider only efficiency improvements; we must also consider the evolution of the R&D model. Hema has 3000+ actively maintained pages (still growing), and today the main maintenance mode is that the front end participates in most of the development, i.e. the front end is consumed directly as a business resource.

However, this will have the following problems:

  • As the business keeps developing, a large amount of technical debt must be repaid, and demand for front-end resources only grows. Simple resource-based support can no longer keep up with business development;
  • A large number of modularized pages rarely brings real technical challenges. Most people just write similar modular pages repeatedly, so business-support engineers feel little value or accomplishment, which also leads to talent attrition.

Therefore, we need to promote changes in the research and development model.

Hema's focused business formats, modularized pages, and sheer page count in the middle and back office create a big opportunity for LowCode/NoCode. Once R&D evolves from ProCode to LowCode/NoCode, application delivery cost drops to a certain extent, and we can hand those modular pages in specific vertical business scenarios to non-professional front-end developers to maintain: outsourced engineers, back-end engineers, even non-development roles.

Going forward, the evolution goal of our products is to transform the Hema front-end team from a [resource-based front end] into a [service-based front end] by driving changes in the R&D model, moving from giving people fish to teaching them to fish:

Technology layered architecture

To achieve the above goals, the ReX Dev platform needs to continue to evolve and iterate, which requires a robust architecture support. Specifically, ReX Dev adopts the following layered architecture:

When designing the above architecture, the following points were mainly considered:

  1. The bottom layer of the platform must be robust enough: the application meta-information model must adapt to future business development and must not constrain upper-layer business. At the same time, the R&D process needs pipeline support, so we need a powerful and flexible pipeline engine;
  2. Since the business keeps changing, the R&D process must be flexible enough to meet the extension needs of different scenarios, and must be able to support new application forms quickly when they appear in the future;
  3. The R&D process by itself only solves collaboration efficiency; to truly improve R&D efficiency and drive the transformation of the R&D model, the upper layer of the platform must provide more efficient LowCode R&D capabilities.

In summary: a robust bottom layer, a flexible middle layer, and an efficient upper layer. Next, let's go through these three layers in turn.

Robust bottom layer

The bottom layer of the R&D platform is the core of the R&D platform. If we look at a minimalist CI/CD model, it is actually composed of two parts: application and iterative loop. The core capabilities of the R&D platform exist to satisfy these two parts, so we have two major goals:

For applications, there must be a robust application meta-information model; for iterative cycles, there must be flexible process scheduling capabilities.

Meta-information architecture design

As shown in the figure below, in order to facilitate management, we generally abstract the application meta-information according to the three layers of domain/product/app (different platforms may have different names but similar structures):

In traditional design, domain+product+app are generally uniquely constrained. This is a more rigorous approach. We also designed it in this way in the early days.

However, after the business actually landed, we found that this unique constraint causes big flexibility problems: as Hema's business keeps developing, product lines keep changing, and related engineering schemes depend indirectly on the constraint, so switching product lines later becomes very expensive and can even require data corrections.

Therefore, on top of this information architecture, we let the app form a unique constraint with the domain only, while its relationship with the product is a weak dependency (the product is flexible and changeable):

Based on this relationship, we can transfer applications to other product lines with one click without any impact on application development.
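As a minimal sketch of this constraint design (all names here are illustrative, not ReX Dev's actual schema): the app is uniquely keyed by domain + name, while the product is a plain mutable field, so a product-line transfer is a single field update that can never collide with the unique key.

```javascript
// Hypothetical meta-information registry: (domain, appName) is the unique
// key; productId is only a weak, mutable reference.
class AppRegistry {
  constructor() {
    this.apps = new Map(); // key: `${domain}/${appName}`
  }

  register({ domain, appName, productId }) {
    const key = `${domain}/${appName}`;
    if (this.apps.has(key)) {
      throw new Error(`app "${appName}" already exists in domain "${domain}"`);
    }
    this.apps.set(key, { domain, appName, productId });
    return key;
  }

  // "One-click" product-line transfer: only the weak reference changes,
  // the unique key (and everything depending on it) stays intact.
  moveToProduct(domain, appName, newProductId) {
    const app = this.apps.get(`${domain}/${appName}`);
    if (!app) throw new Error('app not found');
    app.productId = newProductId;
    return app;
  }
}
```

Note that the same app name can coexist in different domains, which matches the reasoning below about why the app is not globally unique.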

Although the meta-information model looks very simple, it is a basic data structure that the upper layers depend on. If it is not designed carefully, the cost of later refactoring rises sharply, so it deserves careful thought. My suggestion: when designing, look to the future and think through which parts will change and which will not.

Someone may ask: why isn't the app simply globally unique, instead of still depending on the domain? This is a trade-off between the likelihood of change and the cost of conflict:

  1. For a business like Hema the number of apps is very large, and apps in different domains can easily want the same name. Global uniqueness would only force users to prefix their app names, making management and comprehension very costly;
  2. Cross-domain application migration is rare, an app rarely belongs to multiple domains at once, and domains are relatively stable. Overall, the app+domain constraint has the lowest management cost.

Pipeline engine design

In addition to meta-information, another important part of the underlying architecture is the pipeline scheduling engine. As part of the bottom layer, we hope to design a generalized pipeline scheduling engine that can:

  • Schedule and execute arbitrary asynchronous tasks, with state sharing between execution nodes and piped input/output
  • Support flexible customization of task flows, with both serial and parallel execution
  • Support full-link exception capture, task retry and scheduling recovery, and real-time task logs

The following is a typical iterative process:

How should the engine be designed? Let's look at its key design separately below.

Pipeline state machine

As the figure above shows, a process is essentially a directed graph, and its flow is essentially a finite state machine. Unlike a traditional state machine, though, a pipeline execution node is itself asynchronous, and an asynchronous node has multiple states of its own, which makes the pipeline's state machine model more like a fractal: a state node can be decomposed into several sub-states.

The following is a simple abstraction of a state node, which is Promise-like and contains three states. If we define the pipeline nodes as stages and the execution actions as actions, then the execution engine is the run() function:

As you can see, the execution engine logic is very simple: read the current state, execute the corresponding action, get the next state, and then execute the same logic recursively.
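The read-execute-recurse loop can be sketched in a few lines. This is a simplified model rather than the actual engine: here a stage table maps each state name to its action, and an action returns the next state plus its payload.

```javascript
// Minimal recursive state-machine engine: look up the action for the
// current state, run it, then recurse into the state it returns.
// A state with no action is terminal.
async function run(stages, state, payload) {
  const action = stages[state];
  if (!action) return { state, payload }; // terminal state: stop
  const next = await action(payload);     // execute the action for this state
  return run(stages, next.state, next.payload);
}
```

For example, with `pending -> running -> fulfilled` stages that increment and then double a number, `run(stages, 'pending', 1)` resolves to the `fulfilled` state with payload 4.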

Context and pipeline

The state machine model lets us execute the pipeline. In the real world, however, pipeline nodes run in a distributed fashion: they are independent and are likely executed on different machines and in different processes. So we must solve the problem of communicating data and state between nodes.

In order to facilitate communication, we need to provide two models: context state sharing and pipeline support .

  • Context can solve the data sharing problem of multiple nodes and is suitable for persistent storage of data;
  • The pipeline solves the communication problem between two adjacent nodes. The output of the previous node is the input of the current node;

The following figure provides the context of a state node and the flow of pipeline data:

In the code on the right, a state node will receive context and payload as parameters, and will return context and payload after processing. Its outer code is similar to the following structure:

```javascript
async function runStageAction(pipeline, stage, payload) {
  // read context
  const context = await pipeline.readContext();
  // run action
  const result = await stage.runCurrentAction(context, payload);
  // write context
  await pipeline.saveContext(result.context);
  // schedule next action
  stage.scheduleNext(result.payload);
}
```

In the last step, the scheduler will trigger the execution of the next stage action, and the parameters passed in are the output of the current action.

Exception automatic recovery

Since the pipeline engine is the core module of the pipeline service, for stability the engine itself only does task scheduling; the actual execution is dispatched to the corresponding execution service. Pipeline exceptions therefore fall into two categories:

  • Exceptions in the execution of pipeline nodes, this will be more common, usually due to execution logic errors or problems with third-party dependencies;
  • The pipeline scheduling is abnormal, which is generally rare, and only occurs in scenarios such as server failures and restarts.

For the former, we designed the stageError function in the pipeline node, which will be called when the node has an error, which allows the process developer to determine the error handling logic by himself.

For scheduling exceptions, we introduced the DTS timed-task mechanism: tasks that failed to schedule are checked periodically, and the process is re-scheduled when problems are found.
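As an illustration of the recovery check (DTS itself is Alibaba's internal distributed task scheduler; the function below is a hypothetical stand-in for the sweep it would run periodically):

```javascript
// Hypothetical periodic sweep: find stage instances stuck in the
// "scheduled" state past a timeout and hand them back to the dispatcher.
function recoverStalled(instances, redispatch, timeoutMs, now = Date.now()) {
  const stalled = instances.filter(
    (i) => i.status === 'scheduled' && now - i.scheduledAt > timeoutMs
  );
  stalled.forEach((i) => redispatch(i)); // re-schedule each stalled instance
  return stalled.length;                 // how many were recovered
}
```

A real scheduler would run this on a timer and persist a retry count so a permanently broken instance eventually alerts instead of looping.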

Process real-time log

Since the execution of pipeline nodes is often performed asynchronously in the background, it will be very difficult to troubleshoot abnormal execution of nodes. Therefore, we need to record detailed logs of node execution. Some nodes take a long time to execute and require real-time log display.

We built a simple real-time log wrapper based on the original JUST implementation, recording logs before and after each node executes:

```javascript
async function execute(stage, action, context, payload) {
  const logger = createRealtimeLogger();
  const executor = loadExecutor(logger);
  // start log
  await logger.start();
  // execute stage action
  const result = await executor.execute(stage, context, payload);
  // end log
  await logger.end();
  return result;
}
```

Inside execute, the executor writes its log stream to the injected logger, which syncs the logs to the log service in real time.

Pipeline engine architecture

Based on the above design, we can summarize the entire pipeline engine architecture into the following figure:

The main points in the figure above are as follows:

  1. The pipeline schema defines the state machine model of the pipeline, which is a directed graph;
  2. The pipeline engine executes the incoming stage instance tasks based on the pipeline schema, in a read-execute-schedule cycle;
  3. The actual work of the execute function is carried out by the Task Executor service, with real-time log output through the realtime log service before and after execution;
  4. During execution, the pipeline engine keeps writing the corresponding context and I/O data to the pipeline & stage instances for data persistence;
  5. The scheduler performs distributed scheduling of processes via metaq/http and similar channels, with DTS-based checking and automatic recovery.

Product layering

The above is only the pipeline scheduling engine itself. On top of this engine, we can deliver many higher-level capabilities through different levels of product packaging, for example:

At present we have not built the High Level encapsulation; only the Low Level form provides process scheduling support, for the following scenarios:

Flexible mid-tier

With a robust meta-information architecture and a powerful pipeline scheduling engine, we can build all kinds of application R&D processes on top. That is the middle layer's responsibility: providing flexible, low-cost CI/CD process customization for different scenarios.

The following figure shows the current iterative process of different applications of Hema:

At the moment we have three application types to support. Considering that new application types will also need onboarding in the future, what architecture should support this?

Process abstraction: the basis for customization

To make the process extensible, it must first be abstracted. An iteration can be roughly divided into the following stages:

  • Create iteration: mainly creates the iteration instance and development branch; essentially the same for all processes;
  • Configure iteration: initializes resources before formal development starts, such as environment configuration and code configuration; may differ per application type;
  • Develop iteration: the actual coding, debugging, and related gate checks; differs considerably per application type, both online and offline;
  • Deploy iteration: deploys the developed code to the corresponding environments for integration testing, grayscale, and official release; also differs considerably per application process;
  • Bring iteration online: after the formal deployment completes, updates the iteration's status; basically needs no customization.
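The five stages above can be sketched as an abstract process in which the platform fixes the stage order and each application type overrides only the stages that differ (names and hooks are illustrative, not ReX Dev's real API):

```javascript
// Base iteration: default behavior for each stage. Per-type customization
// packages replace only the stages that actually differ.
const baseIteration = {
  create:    (it) => ({ ...it, branch: `daily/${it.version}` }),
  configure: (it) => it, // per-type: env/code configuration
  develop:   (it) => it, // per-type: coding, debugging, gate checks
  deploy:    (it) => it, // per-type: daily / pre / production deployment
  publish:   (it) => ({ ...it, status: 'online' }),
};

// Build a concrete flow from the base plus type-specific overrides,
// then run the stages in the fixed order.
function makeIterationFlow(overrides = {}) {
  const stages = { ...baseIteration, ...overrides };
  return (iteration) =>
    ['create', 'configure', 'develop', 'deploy', 'publish']
      .reduce((it, name) => stages[name](it), iteration);
}
```

A micro-application package, for instance, would supply only its own `deploy` override while inheriting the shared `create` and `publish` behavior.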

Therefore, an iterative R&D process can be abstracted into the following form:

Based on this abstraction, the customized solutions for various application processes are as follows:

In the process customization scheme in the above figure, there is a customization package for each application type, which is an extension and realization of the abstract process. Each customized package contains the following parts:

  • Builder: compiles the application source into the artifact to be released; installed as a dependency of the application;
  • WebUI: the application's operation interface in the R&D process; each application can customize its own UI for higher operating efficiency;
  • Deployment process: the application's CI/CD deployment flow, customized on top of the pipeline engine;
  • Application services: the application's basic services and internal logic encapsulation; for example, the creation services of micro-applications and monolithic applications differ;
  • Process gates: the checkpoints (card points) of the application R&D process, such as CR, release freeze, Lint, and security checks.

Of the above, the builder part is very simple: it is essentially a wrapper around different build tools (the various *packs), so we won't cover it separately. The deployment process is customized on the pipeline scheduling engine described earlier and is naturally extensible, so we won't repeat it here either.

Next we focus on how the remaining three are extended: application services, WebUI, and process gates.

Service Expansion Foundation: SPI

For service extension, the ideal is non-intrusive replacement of execution logic. There are many ways to achieve this, but for an R&D process the best approach is to externalize the service implementation: a third-party system provides the implementation, and the process itself only invokes it. The pattern that realizes this is SPI.

Wikipedia defines SPI as:

Service provider interface (SPI) is an API intended to be implemented or extended by a third party. It can be used to enable framework extension and replaceable components.

In the traditional API mode, the Server side is responsible for interface definition and service implementation. In the SPI mode, the Server side is only responsible for interface definition and interface invocation. The service implementation is provided by the third-party service, as shown in the following figure:

This paradigm also exists in JavaScript: you can think of SPI as a kind of callback. In ReX Dev's practice, taking the micro-application service extension as an example, the service extension architecture is as follows:

In the picture above:

  • ReX Dev is responsible for the node abstraction and interface definition of the R&D process, including application creation, iteration creation, configuration, deployment, etc.;
  • The micro-application extension package is packaged as an independent FaaS application, and provides the implementation of related SPI based on a unified egg-plugin;
  • The meta-information of the micro-application service will be registered with the SPI Registry of ReX Dev, and will be called by the SPI Invoker when the specific node is executed.

In this way, no matter how many processes ReX Dev needs to expand, its core architecture and service stability will not be affected.
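A minimal sketch of this SPI pattern (the registry and all names here are assumptions for illustration, not ReX Dev's real interfaces): the platform owns the interface definition and the invocation, while implementations are registered by third-party extension packages.

```javascript
// The platform side: defines which SPIs exist and invokes them.
// Extension packages register their implementations per application type.
class SpiRegistry {
  constructor() {
    this.providers = new Map();
  }

  register(appType, spiName, impl) {
    this.providers.set(`${appType}:${spiName}`, impl);
  }

  async invoke(appType, spiName, payload) {
    const impl = this.providers.get(`${appType}:${spiName}`);
    if (!impl) throw new Error(`no SPI provider for ${appType}:${spiName}`);
    return impl(payload); // platform only calls; the provider implements
  }
}
```

With this shape, adding a new application type means registering new providers; the platform's core call sites never change, which is exactly the stability property described above.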

WebUI extension

Similar to the service-side extension SPI, the extension architecture of WebUI is similar. Essentially, the basic WebUI framework provides expansion slots , and the specific application process provides expansion modules . We call these expansion components that provide specific functions as FPC (Feature Provider Component):

This design solution has already been practiced in the previous B2B engineering platforms JUST Flow and JUST WebUI, and has proven to be a relatively flexible solution for UI expansion.
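The slot/FPC idea can be illustrated with a tiny registry. This is a toy model: real FPCs would be React components mounted into the base WebUI, but string-producing functions are enough to show the contract between slot and provider.

```javascript
// The base WebUI framework declares named slots; each application type
// registers Feature Provider Components (FPCs) into them.
const slots = new Map();

function registerFPC(slotName, fpc) {
  if (!slots.has(slotName)) slots.set(slotName, []);
  slots.get(slotName).push(fpc);
}

// Rendering a slot runs every registered FPC with the slot's props.
function renderSlot(slotName, props) {
  return (slots.get(slotName) || []).map((fpc) => fpc(props));
}
```

The base framework only knows slot names; what actually appears in a slot is decided entirely by whichever customization package registered FPCs for the current application type.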

Process gate (card point) extension

Common R&D process gates include CodeReview, Lint, security checks, release freeze, testing, and so on. They share a common trait: the gate logic is generally implemented by a third-party system, while triggering and checking the gate is the R&D process's responsibility.

In this setup, any process may run a gate check against a third-party service. To avoid implementing gate logic repeatedly, we need a common gate model so that third-party systems can be wrapped quickly and R&D processes can integrate at low cost. We need the following abstractions:

  • A unified gate model, including the data model and the gate interface; all third-party systems are wrapped behind the same interface;
  • Standard gate events; every gate binds only to standard events, such as gitcommit and build;
  • An event-trigger SDK that lets the R&D process fire standard events at the right moments.

Based on this abstraction, the gate extension scheme is as follows:

In the figure above, all gate tasks are built on BaseTask, which has run() and callback() methods. Every gate is registered in a unified Task Pool. When the R&D process triggers a standard event, matching Tasks are found in the Task Pool and executed, and each Task instance is associated with the current process.
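A hypothetical sketch of this gate model: the method names run()/callback() follow the description above, while everything else (the pool, the event matching) is illustrative.

```javascript
// Every gate extends BaseTask, binds to one standard event, and reports
// its result via callback().
class BaseTask {
  constructor(event) {
    this.event = event;
    this.status = 'pending';
  }
  async run(ctx) {
    throw new Error('implement run()'); // subclasses provide the check
  }
  callback(result) {
    this.status = result ? 'passed' : 'failed';
  }
}

const taskPool = [];
function registerTask(task) {
  taskPool.push(task);
}

// The R&D process fires a standard event; matching gates run their checks.
async function trigger(event, ctx) {
  const matched = taskPool.filter((t) => t.event === event);
  for (const t of matched) t.callback(await t.run(ctx));
  return matched;
}
```

A Lint gate, for instance, would subclass BaseTask bound to the `gitcommit` event and implement run() as a call to the third-party lint service.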

In addition, a gate task instance often needs UI interaction (such as submitting a CodeReview or requesting a release-freeze exemption), so each gate task has a corresponding UI module, implemented via the FPC mechanism from the previous section.

Thinking about process customization

In this chapter we introduced how ReX Dev achieves different process extensions through sound architecture design without affecting its own stability. Generally, this kind of extension should be used only when R&D models differ substantially: a new process carries development workload of its own, plus the risk of long-tail fragmentation, which creates architecture governance problems.

In addition to process customization, the following alternatives can be considered:

Efficient upper layer

The R&D process covers the entire life cycle of the project, and the most critical part of it is coding, which is also the core part of a project development process. To improve efficiency at the upper level, the key is how to improve coding efficiency.

In Hema's business scenarios, the modular, lightweight application system makes low-code and even no-code development possible. Let's analyze the audiences and scenarios that suit different R&D models:

From ProCode -> LowCode -> NoCode, the applicable scenarios get narrower and narrower while the R&D efficiency gains get more pronounced. As the amount of code shrinks, the gate checks attached to ProCode, such as Lint and CodeReview, may no longer be needed, which further improves development efficiency.

Under the traditional siloed development model, NoCode/LowCode/ProCode solutions are generally implemented independently, which leads to the single-mode dilemma: early on, the business lands quickly on LowCode/NoCode, but later requirement iterations raise page complexity beyond what the LowCode/NoCode platform can support. In the end you either re-implement the application in ProCode or keep piling features onto the LowCode/NoCode platform.

For a LowCode/NoCode platform, more features are not necessarily better, because it targets non-professional front-end developers and its advantage is simplicity. Piled-on features add complexity and hurt the development experience of the original users. It is vital to clarify a LowCode/NoCode platform's positioning and capability circle: keep it simple enough that it stays efficient in its specific scenarios.

Hema's Choice: Progressive R&D

Hema is determined to transform its front-end team from resource-based support to a service-based team: senior front-end developers focus on ProCode development in complex scenarios, while LowCode/NoCode lets outsourced, back-end, and non-technical personnel complete application delivery. We hope an application can be supported along NoCode -> LowCode -> ProCode: when one mode can no longer satisfy the requirements, it degrades to the next. We call this the progressive R&D model.

The biggest advantage of the progressive R&D model is that, through graceful degradation between R&D modes, each mode stays simple within its applicable scope, avoiding platform feature sprawl and rising complexity.

The following figure shows the transformation of Hema's progressive R&D model and the packaging logic of the application framework:

In the picture above:

  • NoCode can be downgraded to LowCode, LowCode can be downgraded to ProCode, and limited ProCode can be reversibly converted to LowCode;
  • The bottom layer of all R&D modes is realized by continuous packaging based on the same application framework. LowCode is based on ProCode packaging, and NoCode is based on LowCode mode packaging;

Due to space limitations, we will focus on the downgrade and reverse conversion logic of ProCode -> LowCode.

LowCode/ProCode conversion solution

Within the Alibaba Group, the most popular LowCode solutions are based on the Schema LowCode model. Schema LowCode means that the bottom layer of LowCode visualization is implemented based on a set of schema or DSL, and the UI is edited by manipulating the schema.

Hema chose JSX-AST LowCode mode. JSX itself provides a set of XML-style declarative syntax. We manipulate the compiled AST of JSX to implement UI editing. The biggest advantage over schema is that it can achieve 100% reverse conversion.

The specific comparison is shown in the figure below:

There are several main reasons why Hema made this choice:

  • Hema's application interactions are relatively modular, and JSX can be modularized at the application layer;
  • Progressive R&D is our core philosophy, so we prefer an overall R&D model that can degrade gracefully;
  • In the JSX-AST field we have enough prior technical accumulation and relatively mature solutions.

JSX-AST LowCode implementation mechanism

Based on the LowCode of JSX-AST, a key problem needs to be solved: find the AST node behind a UI element and apply AST Patch to it to achieve the effect of instant editing.

The specific principle is not complicated. The general conversion process is as follows:

In editing mode, when JSX is compiled to an AST, a dedicated babel plugin adds tags to JSX elements (recording the AST node, source location, etc.). When the UI is manipulated, the target AST node is located via these tags and replaced by the generated patch; after recompiling and rendering, the modification takes effect in real time, and the AST is also written back to regenerate the JSX code.

The above is the ProCode/LowCode scheme of reversible conversion.
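To make the mechanism concrete, here is a toy version of the lookup-and-patch step. A plain object tree stands in for the Babel AST and `loc` stands in for the compile-time tag; none of this is the real implementation, only the shape of it.

```javascript
// Find the node carrying a given compile-time location tag by walking
// the (simplified) tree depth-first.
function findByLoc(node, loc) {
  if (!node || typeof node !== 'object') return null;
  if (node.loc === loc) return node;
  for (const child of node.children || []) {
    const hit = findByLoc(child, loc);
    if (hit) return hit;
  }
  return null;
}

// Apply a UI edit as a patch on the tagged node's props; the patched
// tree would then be re-printed as JSX and re-rendered.
function applyPatch(ast, loc, patch) {
  const target = findByLoc(ast, loc);
  if (!target) throw new Error(`no AST node tagged ${loc}`);
  Object.assign(target.props, patch);
  return ast;
}
```

The real system does the same thing against Babel's JSX node types, which is what makes the ProCode representation fully recoverable: the patch lands in the source AST, not in a parallel schema.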

Application layer constraints and degradation processing

Having understood the JSX-AST principle, you may wonder: JSX syntax is very flexible, which can make the actual AST structure very complicated and ultimately hard to edit visually. How does Hema deal with this?

Indeed, to implement LowCode based on JSX-AST, this problem must be faced. In fact, there are two options to solve this problem:

  • One is to impose strong UI constraints by defining a JSX-like DSL, e.g. disallowing JS conditional expressions and loop statements inside JSX; there are many such solutions inside Alibaba Group;
  • The other is to keep plain JSX with its original syntax, but rein in JSX's freedom through application-layer writing conventions.

Although defining a DSL is not complicated, it is still a dialect. Once a dialect is introduced you need a full set of engineering support (IDE plugins, babel plugins, and other tooling), and it is very unfriendly to third-party output. Given Hema's current situation, we judged the DSL route too heavy and unsuited to Hema's scenarios.

In the end, we adopted option two. The constraints applied to JSX look like this in practice:

```jsx
//@lowcode: 1.0 (indicates LowCode support is enabled; checked in strict mode)

// Module imports
import React from 'react';
import styled from 'styled-components';
import { If, ForEach, observer } from '@alife/hippo-app';
import { Layout, SearchForm, Table } from '@alife/hippo';
import Model from './model';

// Constant definitions
const { Page, Content, Header, Section } = Layout;
const { Item } = SearchForm;

// Style definitions
const StyleContainer = styled.div`
  height: 100%;
`;

// View definition (the name is fixed as View, with fixed parameters: $page/$model)
function View({ $page, $model }) {
  return (
    <StyleContainer>
      <Page page={$page}>
        <Header>
          {/* If conditional component: if value = ? then ? */}
          <If value={!$model.loading}>
            <HeaderDetail model={$model.detail} />
          </If>
        </Header>
        <Content>
          <Section>
            <SearchForm model={$model.section1.form} onSearch={$model.search}>
              <Item name="p1" title="Condition 1" component="input" />
              <Item name="p2" title="Condition 2" component="input" />
              <Item name="p3" title="Condition 3" component="input" />
            </SearchForm>
          </Section>
          <Section>
            {/* ForEach loop component: iterates over the items through FaCC */}
            <ForEach items={$model.data.list} keyName="key">
              {($item, i) => (
                <div className="list-item">
                  <div className="header">
                    <div className="title">{$item.title}</div>
                    <div className="extra">
                      <Button onClick={(v) => $model.show(v, $item)}>Load</Button>
                    </div>
                  </div>
                </div>
              )}
            </ForEach>
            <Pagination current={$model.page.pageNo} onChange={$model.changePage} />
          </Section>
        </Content>
      </Page>
    </StyleContainer>
  );
}

// Export the view
export default observer(Model)(View);
```

Beyond JSX itself, the application layer also needs to be constrained, in three aspects:

  • The application structure must be strongly constrained. An overly free structure brings very large management and analysis costs to LowCode;
  • The applicable scenarios must be constrained. We do not provide free-form building; we only provide rapid building for high-frequency scenarios, to keep the UI structure convergent;
  • A strict mode imposes strong constraints on code style and writing; only applications that conform to strict mode can use the LowCode building mode;

These constraints make applications highly standardized, which greatly reduces the implementation complexity behind the JSX-AST LowCode mode. At the same time, if users need to develop with ProCode, they only need to follow this set of specifications, and the code they write can still be built with LowCode.
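A strict-mode gate like the one described above might be sketched as follows. This is a minimal illustration under stated assumptions, not Hema's actual checker: the pragma comment and the fixed `View({ $page, $model })` signature come from the example above, while `isLowCodeEligible` and its regexes are hypothetical.

```javascript
// Decide whether a source file is eligible for LowCode editing by checking
// two of the strict-mode constraints: the @lowcode pragma on the first line,
// and a view function with the fixed name and fixed parameters.
function isLowCodeEligible(source) {
  const hasPragma = /^\/\/\s*@lowcode:\s*[\d.]+/.test(source.trim());
  const hasViewFn = /function\s+View\s*\(\s*\{\s*\$page\s*,\s*\$model\s*\}\s*\)/.test(source);
  return hasPragma && hasViewFn;
}

const ok = `//@lowcode: 1.0
function View({ $page, $model }) { return null; }`;

const notOk = `function App() { return null; }`;

console.log(isLowCodeEligible(ok));    // true
console.log(isLowCodeEligible(notOk)); // false
```

A real checker would work on the AST rather than regexes, but the product behavior is the same: files that fail the gate simply stay in ProCode-only mode.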

Unified Node/Web Construction Plan

To achieve integrated development across multiple R&D modes, in addition to unifying the application layer, the engineering scheme must also stay consistent: whether LowCode or ProCode, the underlying build mechanism should be unified, so that both modes produce consistent build output.

Since LowCode builds run in the browser, the web side needs a build solution consistent with the Node side. The ProCode builder is currently implemented on top of webpack, so after systematic research and solution selection, we adopted a web-side build solution also based on webpack.

The details are as follows:

In the above builder design scheme:

  • ProCode and LowCode follow the same build model: read JS/CSS from the target location (disk or memory) and produce the target code after building;
  • webpack, together with the relevant plug-ins and configuration, is compiled into a bundle and run on the web side via the Nodebowl runtime;
  • A pre-building scheme for dependencies enables remote compilation and loading, so the web side can add and remove dependencies as freely as in local development;
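The first point above, one build model parameterized only by where sources come from, can be sketched like this. This is an illustrative sketch, not the actual ReX Dev builder: `MemoryFS` and `createBuilder` are hypothetical names, and the "compile" step is a stand-in for the bundled webpack that the real scheme runs via the Nodebowl runtime.

```javascript
// Minimal in-memory "file system", standing in for the memory fs that a
// web-side build would use in place of the disk.
class MemoryFS {
  constructor(files = {}) { this.files = files; }
  readFile(path) { return this.files[path]; }
  writeFile(path, content) { this.files[path] = content; }
}

// One builder for both modes, parameterized only by the file-system adapter:
// ProCode passes a disk-backed fs, LowCode passes a MemoryFS.
function createBuilder(fs) {
  return {
    build(entry) {
      const source = fs.readFile(entry);
      // Stand-in for the real webpack compile step.
      const output = `/* built */\n${source}`;
      fs.writeFile('/dist/bundle.js', output);
      return output;
    },
  };
}

// LowCode (web side): sources live in memory, same build entry point.
const memFs = new MemoryFS({ '/src/index.js': 'console.log("hi")' });
const webBuilder = createBuilder(memFs);
webBuilder.build('/src/index.js');
console.log(memFs.readFile('/dist/bundle.js').startsWith('/* built */')); // true
```

Swapping the adapter rather than the builder is what keeps the two modes' outputs consistent.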

LowCode/ProCode integrated research and development summary

Finally, summarize the LowCode/ProCode integrated research and development plan:

  • Choose the most suitable solution: there is no best solution, only the most suitable one. The JSX-AST-based LowCode/ProCode conversion scheme fits our concept of progressive R&D; in your scenario, a Schema-based mode may be enough;
  • Draw a clear boundary: LowCode's efficiency lies in specific, well-scoped business development. For scenarios that do not fit LowCode, ProCode is the more efficient choice, and a clear boundary also helps keep the platform simple;
  • Avoid building applications from scratch: do not let users start from zero; start from "semi-finished products" such as templates, scaffolding, and data models (just like the ready-to-cook dishes Hema sells, semi-finished products let even a novice who can't cook produce a dish).

Thinking and summary

To pursue generality, this sharing has focused on the product thinking and technical implementation behind the ReX Dev R&D platform rather than on product functions: how a Hema application is created or deployed matters little to readers outside Hema; how the creation and deployment processes are implemented is the key.

If a worker wants to do his job well, he must first sharpen his tools. I believe a team should, while supporting the business, always consider how to improve its overall R&D efficiency. What matters in this kind of basic investment is not the amount of resources, but that the investment is continuous and stable. Engineering, like software architecture, requires continuous evolution and continuous optimization for the future.

Front-end engineering is complex, systematic work. When designing an engineering system, an architect needs to think long-term: if an architectural problem has already been identified, solve it early rather than leaving design debt as a pitfall for those who come later. I have experienced this deeply in recent years of front-end engineering work.

Based on my own experience, I follow these principles in front-end engineering architecture design:

  1. Do layered design well. Whether your product is a full suite or a free combination, the layering of the core architecture must be solid. The core of layered design is putting the stable parts at the bottom, able to cope with business changes over the next 3 to 5 years, while upper-layer solutions stay focused enough to solve the problems of specific scenarios.
  2. Embrace community trends. Unless you have sufficient resources and more advanced concepts, an engineering solution is best built by encapsulating or adapting community solutions. Community trends determine the future direction; the biggest problem with reinventing the wheel is that it may go unmaintained and eventually become the team's technical debt.
  3. Converge in the product, stay flexible in the architecture. Front-end engineering abhors fragmentation and emphasizes unified, convergent solutions, but flexibility must also be preserved. The architecture therefore should not be over-designed and over-constrained around one preset scenario: keep the bottom layer flexible and place the constraints in the top-layer product functions. Facing demand, the right posture is: "support everything" in the underlying architecture, but "choose not to support" in the product.

On the ReX Dev platform, these principles run throughout. The layered design is consistent, from the product layering of the whole R&D platform to the design layering of the pipeline engine and the R&D modes. When choosing the ProCode/LowCode conversion solution, we discarded the DSL option; using native JSX also spares us the maintenance burden a DSL would impose in the future. And while the architecture provides flexible process extension, in concrete process design our first principle is still convergence, to avoid fragmentation.

Front-end engineering is a highly scenario-specific technical field. Different front-end teams have different technical forms, so the engineering solutions behind them differ enormously; a unified engineering platform is essentially a unified, convergent technical form. Within Alibaba Group there are a large number of internal engineering platforms, often a different set for each BU, precisely because each BU's technical form differs.

As an architect in the engineering field, whether to build your own R&D platform or extend and customize an existing one requires careful thought about the team's current situation, its future development, and the input-output ratio.

Finally, following the early-chat convention, I recommend a book: "Clean Architecture".

