3.1 System model
This paper analyzes the corresponding resource allocation scheme based on a vehicle–cloud collaborative edge caching model as the network model; the specific vehicular network model is shown in Fig. 1. In this model, L RSUs are deployed along the road, denoted as ℒ = {ℳ1, ℳ2, ℳ3, ⋯, ℳL}, and each RSU is equipped with an MEC server. Since the Poisson distribution is suitable for describing the number of random events per unit time (or space), the N vehicles on the road are assumed to follow a Poisson distribution [27] and are expressed as \( \mathbf{\mathcal{V}}=\left\{{v}_1,{v}_2,{v}_3,\cdots, {v}_N\right\} \). Because both the MEC servers and neighboring vehicles have computing and caching capabilities, they are collectively referred to as service nodes \( \mathbf{\mathcal{W}}=\left\{{w}_1,{w}_2,{w}_3,\cdots, {w}_M\right\} \). n vehicles are randomly distributed within the coverage area of each RSU; that is, the set of vehicles within the coverage (service) area of ℳj is \( {\mathbf{\mathcal{V}}}_j=\left\{{v}_1,{v}_2,\cdots, {v}_n\right\} \). Each vehicle's OBU is equipped with an 802.11p network interface and a cellular network interface. A vehicle can offload its task to an MEC server for computation via the RSU, or offload it to a neighboring vehicle over V2V communication. To reuse spectrum efficiently, the V2I and V2V modes operate in the same frequency band. The spectrum is evenly divided into K sub-channels, denoted as \( \mathbf{\mathcal{K}}=\left\{1,2,3,\cdots, K\right\} \), and the bandwidth of each sub-channel is B Hz.

The vehicle offloading strategy set is expressed as \( \mathbf{\mathcal{A}}=\left\{{a}_1,{a}_2,{a}_3,\cdots, {a}_N\right\} \): if ai = 1, vehicle vi offloads its task to a service node for computation; if ai = 0, vi performs the computing task locally. Assume that at time t there are some tasks in the buffer pool. When a vehicle issues a task request and the task is already cached on a service node, the service node informs the vehicle that the task exists there and, once its computation is completed, returns the result directly to the vehicle. In this case the vehicle does not need to perform any offloading operation, which effectively reduces the energy consumption of the mobile device and the task offloading delay. If the requested task is not cached on any service node, the vehicle needs to make an offloading decision followed by resource allocation. When a service node completes a requested task for the first time, it considers the caching decision. The cache strategy set of service node wm is denoted as \( {\mathbf{\mathcal{G}}}_m=\left\{{g}_{m,1},{g}_{m,2},{g}_{m,3},\cdots, {g}_{m,n1}\right\} \): if gm, n1 = 1, service node wm caches computing task n1, so that the next request for it avoids network transmission and reduces computation delay. The cache decisions of all service nodes are collected as \( \mathbf{\mathcal{AG}}=\left\{{\mathbf{\mathcal{G}}}_1,{\mathbf{\mathcal{G}}}_2,{\mathbf{\mathcal{G}}}_3,\cdots, {\mathbf{\mathcal{G}}}_M\right\} \).
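To make the notation concrete, the following minimal Python sketch (an illustration only, with assumed parameter values such as L, K, B and the Poisson rate) sets up the model's main containers: the vehicle count N, the service-node count M, the binary offloading vector 𝒜, and the binary cache matrix 𝒜𝒢.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not values from the paper)
L = 4        # number of RSUs, each equipped with an MEC server
lam = 10     # Poisson rate of vehicles per RSU coverage area
K = 8        # number of sub-channels
B = 1e6      # bandwidth of each sub-channel in Hz

# The number of vehicles in each RSU coverage area follows a Poisson distribution
vehicles_per_rsu = rng.poisson(lam, size=L)
N = int(vehicles_per_rsu.sum())

# Service nodes = MEC servers (one per RSU) + neighboring vehicles
M = L + N

# Offloading strategy set A: a_i = 1 -> offload to a service node, a_i = 0 -> compute locally
A = rng.integers(0, 2, size=N)

# Cache decisions AG: AG[m, n1] = 1 -> service node w_m caches computing task n1
num_task_types = 20                       # assumed size of the task library
AG = np.zeros((M, num_task_types), dtype=int)

print(f"N = {N} vehicles, M = {M} service nodes, offloaded tasks = {A.sum()}")
```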
3.2 Computing model
Based on the system model built above, it is assumed that each task-requesting vehicle has a computing task \( {\mathbf{\mathcal{Z}}}_i=\left\{{d}_i,{s}_i,{t}_i^{\mathrm{max}}\right\} \), i ∈ N, to be processed, where di is the input data size of task \( {\mathbf{\mathcal{Z}}}_i \), si is the number of CPU cycles required to complete it, and \( {t}_i^{\mathrm{max}} \) is the maximum delay that task \( {\mathbf{\mathcal{Z}}}_i \) can tolerate. The vehicle can offload the task to an MEC server for computation via the RSU, offload it to a neighboring vehicle for processing, or execute it locally.
For offloaded computing, when the limited computing capability of the vehicle itself cannot meet the delay requirement of the task, the task needs to be offloaded to a service node for computation. Task processing inevitably introduces delay and energy consumption. Since the data volume of the returned results is small, the delay and energy consumption of the return process are ignored; only the upload delay, computation delay and transmission energy consumption are considered [28, 29].
In this paper, the cost for a task-requesting vehicle to offload its task to service node wj for computation is defined as the weighted combination of delay and energy consumption, expressed as:
$$ {u}_i^{off}=\alpha {t}_i^{off}+\beta {e}_i^{off} $$
(1)
where α and β are non-negative weighting factors for delay and energy consumption respectively, satisfying α + β ≤ 1. \( {t}_i^{off}=\frac{d_i}{r_{i,j}}+\frac{s_i}{f_j^i} \) is the sum of the upload delay and the computation delay, where ri,j is the transmission rate between vehicle vi and service node wj and \( {f}_j^i \) is the computing resource allocated by service node wj to vehicle vi. \( {e}_i^{off}={p}_i\frac{d_i}{r_{i,j}} \) is the energy consumption of the transmission process, with pi the transmission power of vehicle vi.
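As a quick illustration of Eq. (1), the following sketch evaluates the offloading cost for a single task; the function and all numerical values are assumptions used only for demonstration.

```python
def offload_cost(d_i, s_i, r_ij, f_ij, p_i, alpha, beta):
    """Weighted offloading cost of Eq. (1): u_i^off = alpha * t_i^off + beta * e_i^off."""
    t_off = d_i / r_ij + s_i / f_ij      # upload delay + computation delay
    e_off = p_i * d_i / r_ij             # transmission energy consumption
    return alpha * t_off + beta * e_off

# Example with assumed values: 1 MB task, 10 Mbit/s link, 2 GHz of allocated MEC resources
print(offload_cost(d_i=8e6, s_i=1e9, r_ij=1e7, f_ij=2e9, p_i=0.5, alpha=0.5, beta=0.5))
```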
For local computation, suppose the computing capability of vehicle vi is \( {F}_i^l \), noting that different vehicles have different computing capabilities. When task \( {\mathbf{\mathcal{Z}}}_i \) is computed locally, the cost that vehicle vi needs to bear is:
$$ {u}_i^l=\alpha {t}_i^l+\beta {e}_i^l $$
(2)
where \( {t}_i^l=\frac{s_i}{F_i^l} \) is the delay required for the computation, \( {e}_i^l=\varphi {s}_i{\left({F}_i^l\right)}^2 \) is the energy consumed to perform the task, and φ is the power coefficient of energy consumed per CPU cycle [30].
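A matching sketch of the local cost in Eq. (2), again with assumed values; comparing it with the offloading cost of Eq. (1) is what the offloading decision ai ultimately hinges on.

```python
def local_cost(s_i, F_l, phi, alpha, beta):
    """Weighted local cost of Eq. (2): u_i^l = alpha * t_i^l + beta * e_i^l."""
    t_local = s_i / F_l                   # local computation delay
    e_local = phi * s_i * F_l ** 2        # local computation energy
    return alpha * t_local + beta * e_local

# Example: a 1 Gcycle task on a 1 GHz on-board CPU with an assumed phi = 1e-27
print(local_cost(s_i=1e9, F_l=1e9, phi=1e-27, alpha=0.5, beta=0.5))
```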
3.3 Communication model
When traditional orthogonal multiple access (OMA) is applied in the MEC system, each terminal user has a dedicated one-to-one transmission channel to ensure stable signal transmission. The delay \( {T}_v^{OMA} \) for user v to complete task offloading in this scenario is expressed as follows:
$$ {T}_v^{OMA}=\frac{S_v}{B\log \left(1+\frac{p_v^{OMA}{\left|{h}_v\right|}^2}{p_v}\right)} $$
(3)
where \( {p}_v^{OMA} \) is the transmission power of user v, Sv is the size of the task offloaded by user v, hv is the channel gain between user v and the edge server, pv is the noise interference power experienced by user v, and B is the channel transmission bandwidth. Thus, the total delay TOMA to complete the offloading of all vehicle users is expressed as:
$$ {T}^{OMA}=\sum \limits_{v=1}^V{T}_v^{OMA} $$
(4)
In a communication network based on hybrid NOMA-MEC, the system allows multiple vehicle users to complete task transmission and offloading in the same time slot and frequency band. Suppose two vehicular network users m and n request task offloading at the same time, with Dn ≥ Dm, m, n ∈ {1, 2, …, V}. In this mode, users m and n can simultaneously offload tasks to the MEC server within time slot Dm, with transmission powers \( {p}_m^{OMA} \) and \( {p}_n^{OMA} \) respectively. Note that if the signal of user m is decoded in the second stage of successive interference cancellation, the performance of user m is the same as under OMA, so the transmission delay of user m is not affected [31]. The transmission rate Rn of user n in time slot Dm satisfies:
$$ {R}_n\le B\log \left(1+\frac{p_{nm}^{NOMA}{\left|{h}_n\right|}^2}{p_m^{OMA}{\left|{h}_m\right|}^2+{p}_v}\right) $$
(5)
where \( {p}_{nm}^{NOMA} \) is the transmission power of vehicle user n in time slot Dm, and hm and hn are the channel gains of vehicle users m and n respectively.
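A minimal sketch of the rate bound in Eq. (5): during slot Dm, user m's signal is treated as interference for user n. The base-2 logarithm and all numeric values are assumptions.

```python
import numpy as np

def noma_rate_n(B, p_nm, h_n, p_m_oma, h_m, p_noise):
    """Upper bound on user n's rate in slot D_m, Eq. (5)."""
    sinr = p_nm * abs(h_n) ** 2 / (p_m_oma * abs(h_m) ** 2 + p_noise)
    return B * np.log2(1.0 + sinr)

print(noma_rate_n(B=1e6, p_nm=0.8, h_n=0.9, p_m_oma=0.5, h_m=0.6, p_noise=1e-3))
```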
Offloading end-user tasks with NOMA generates more energy consumption than the OMA mode [32]. Therefore, this paper uses a hybrid NOMA-MEC method to offload the tasks requested by mobile terminal users. The specific steps are as follows: first, user m and user n perform task offloading simultaneously within time Dm; second, after user m completes its offloading, user n continues offloading the remaining data in OMA mode, which takes an additional \( {T}_n^{re} \). The total delay Tn of vehicle user n is therefore:
$$ {T}_n={D}_m+\frac{S_n-{R}_n{D}_m}{B\log \left(1+\frac{p_{nn}^{NOMA}{\left|{h}_n\right|}^2}{p_v}\right)} $$
(6)
where \( {p}_{nn}^{NOMA} \) is the transmission power of vehicle user n in the second offloading phase. The actual offloading delay Tm of vehicle user m is expressed as:
$$ {\displaystyle \begin{array}{l}{T}_m=\frac{S_m}{B\log \left(1+\frac{p_m^{OMA}{\left|{h}_m\right|}^2}{p_v}\right)}\\ {}\kern3em s.t.{T}_m\le {D}_m\end{array}} $$
(7)
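The hybrid scheme of Eqs. (6) and (7) can be sketched as follows: user n transmits RnDm bits alongside user m during slot Dm and then finishes the remaining Sn − RnDm bits alone in OMA mode. All parameter values below are assumptions for illustration.

```python
import numpy as np

def hybrid_noma_delays(S_m, S_n, D_m, B, p_m_oma, h_m, p_nm, p_nn, h_n, p_noise):
    """Delays T_m and T_n of Eqs. (7) and (6) for the hybrid NOMA-MEC scheme."""
    # User m offloads in OMA fashion and must finish within D_m (constraint in Eq. (7))
    T_m = S_m / (B * np.log2(1.0 + p_m_oma * abs(h_m) ** 2 / p_noise))
    # User n's rate during slot D_m, with user m's signal as interference (Eq. (5))
    R_n = B * np.log2(1.0 + p_nm * abs(h_n) ** 2 / (p_m_oma * abs(h_m) ** 2 + p_noise))
    # The remaining bits are sent alone in OMA mode (second phase of Eq. (6))
    remaining = max(S_n - R_n * D_m, 0.0)
    T_n = D_m + remaining / (B * np.log2(1.0 + p_nn * abs(h_n) ** 2 / p_noise))
    return T_m, T_n

print(hybrid_noma_delays(S_m=4e6, S_n=8e6, D_m=2.0, B=1e6,
                         p_m_oma=0.5, h_m=0.6, p_nm=0.8, p_nn=0.8,
                         h_n=0.9, p_noise=1e-3))
```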
3.4 Problem description
When a smart vehicle requests a task computation, it first checks whether the content is cached in its own buffer pool. If the content is available locally, there is no need to issue a task request. Otherwise, the vehicle scans the surrounding service nodes to check for a cached copy; if one exists, the result is returned after the service node completes the computation. If none exists, the vehicle needs to decide whether to offload.
After the task is offloaded to a service node and the computation is completed, the service node considers updating its cache. Once the content is returned, the service ends. This paper aims to minimize the system overhead through proper offloading and caching decisions, together with the allocation of communication and computing resources. Thus, the optimization objective is expressed as:
$$ {\displaystyle \begin{array}{c}\underset{\mathbf{\mathcal{A}},\mathbf{\mathcal{C}},\mathbf{\mathcal{P}},\boldsymbol{\mathcal{F}},\mathbf{\mathcal{A}\mathcal{G}}}{\min }U\left(\mathbf{\mathcal{A}},\mathbf{\mathcal{C}},\mathbf{\mathcal{P}},\boldsymbol{\mathcal{F}},\mathbf{\mathcal{A}\mathcal{G}}\right)\\ {}=\sum \limits_{i=1}^N{hit}_{j,i}{u}_i^{cache}+\left(1-{hit}_{j,i}\right){g}_{j,i}\left[\left(1-{a}_i\right){u}_i^l+{a}_i{u}_i^{off}\right]\\ {}=\sum \limits_{i=1}^N{hit}_{j,i}\alpha \frac{s_i}{f_j^i}+\left(1-{hit}_{j,i}\right){g}_{j,i}\left\{\left(1-{a}_i\right)\left[\alpha \frac{s_i}{F_i^l}+\beta \varphi {s}_i{\left({F}_i^l\right)}^2\right]+{a}_i\left[\alpha \left(\frac{d_i}{r_{i,j}}+\frac{s_i}{f_j^i}\right)+\beta {p}_i\frac{d_i}{r_{i,j}}\right]\right\}\end{array}} $$
(8)
$$ s.t.\kern0.5em C1:{a}_i\in \left\{0,1\right\},\forall i\in \mathbf{\mathcal{N}} $$
(9)
$$ C2:{c}_{i,k}\in \left\{0,1\right\},\forall i\in \mathbf{\mathcal{N}},k\in \mathbf{\mathcal{K}} $$
(10)
$$ C3:{g}_{j,i}\in \left\{0,1\right\},\forall i\in \mathbf{\mathcal{N}} $$
(11)
$$ C4:0<{p}_i<{p}_{\mathrm{max}},\forall i\in \mathbf{\mathcal{N}} $$
(12)
$$ C5:{f}_j^i>0,\forall i\in \mathbf{\mathcal{N}} $$
(13)
$$ C6:\sum \limits_{i\in N}{a}_i{f}_j^i\le {F}_j^{\mathrm{max}},\forall i\in \mathbf{\mathcal{N}},j\in \boldsymbol{\mathcal{M}} $$
(14)
$$ C7:\left(1-{a}_i\right){t}_i^l+{a}_i{t}_i^{off}\le \min \left\{{t}_i^{\mathrm{max}},\frac{L_j}{V_u},\frac{d_{\mathrm{interrupt}}}{\left|{V}_u-{V}_v\right|}\right\},\forall i\in \mathbf{\mathcal{N}} $$
(15)
$$ C8:\sum \limits_{i=1}^N{g}_{j,i}{d}_i\le {H}_j $$
(16)
where \( \mathbf{\mathcal{A}} \) represents the offloading decision set of all task-requesting vehicles, \( \mathbf{\mathcal{C}} \) represents the channel allocation status, \( \mathbf{\mathcal{P}} \) is the set of task transmission powers of the offloading vehicles, ℱ is the computing resource allocation strategy, and \( \mathbf{\mathcal{AG}} \) represents the cache decisions of the service nodes.
In Eqs. (9) to (16), constraint C1 indicates that the offloading decision is a 0-1 variable, and C3 indicates that the caching decision is a 0-1 variable. C2 indicates that the channel allocation matrix is binary. C4 ensures that the transmission power is positive and does not exceed the maximum uplink transmission power. C5 and C6 indicate that the computing resources allocated by a service node are positive and do not exceed its maximum computing capacity. C7 is the delay constraint, where Lj is the coverage length of RSU ℳj, Vu is the speed of the task-requesting vehicle, Vv is the speed of the service vehicle, and dinterrupt is the maximum interruption distance. C8 indicates that the content cached on a service node cannot exceed its maximum cache capacity Hj.
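To illustrate how Eq. (8) aggregates the per-vehicle costs, the sketch below evaluates the objective for given decision vectors and checks the resource constraints C6 and C8; the random decision vectors, capacities and all other values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha, beta, phi = 5, 0.5, 0.5, 1e-27          # assumed weights and energy coefficient

d = rng.uniform(1e6, 8e6, N)                      # input sizes d_i (bits)
s = rng.uniform(2e8, 1e9, N)                      # required CPU cycles s_i
r = rng.uniform(5e6, 2e7, N)                      # transmission rates r_{i,j}
f = rng.uniform(1e9, 3e9, N)                      # allocated MEC resources f_j^i
F_l = rng.uniform(5e8, 1e9, N)                    # local computing capabilities F_i^l
p = rng.uniform(0.1, 0.5, N)                      # transmission powers p_i

hit = rng.integers(0, 2, N)                       # cache hit indicators hit_{j,i}
a = rng.integers(0, 2, N)                         # offloading decisions a_i
g = np.ones(N, dtype=int)                         # g_{j,i} factors as in Eq. (8)

u_cache = alpha * s / f                           # cost when the task result is cached
u_local = alpha * s / F_l + beta * phi * s * F_l ** 2
u_off = alpha * (d / r + s / f) + beta * p * d / r

U = np.sum(hit * u_cache + (1 - hit) * g * ((1 - a) * u_local + a * u_off))
print("objective U =", U)

# Constraint checks with assumed capacities: C6 (computing budget), C8 (cache capacity)
F_max, H_j = 1e10, 2e7
print("C6 satisfied:", np.sum(a * f) <= F_max)
print("C8 satisfied:", np.sum(g * d) <= H_j)
```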